At a time when people are mostly hit by email or network worms, it's typical that an infected computer has just a couple of infected files, or even just one. This might explain why we've been getting confused reports from people who've been hit by some of the latest Lovgate variants. Lovgate spreads in a variety of ways, one of which is a "companion" infection. A companion virus renames its target file so that the user runs the virus rather than the real program. For example, Lovgate.AE will locate EXE files on the hard drive, rename them to have a ".ZMX" extension instead of ".EXE", and drop itself as an .EXE file with the same name into the same directory. Lovgate.AH does the same but uses ".~EX" as the extension. So a directory that starts out full of ordinary EXE files will end up full of same-named virus copies, with the original programs renamed. The virus might perform this renaming operation on hundreds of EXE files in one go. End result: instead of finding one or two infected files, the user finds masses of them. With Lovgate, this is normal. Companion viruses are a really old idea. In the early 1990s, they typically worked by simply dropping a program called FILE.COM if FILE.EXE existed in the same directory, exploiting the DOS execution order. For example, see HLLC.Plane, featured in our Update Bulletin 2.25 from April 1996: http://www.f-secure.com/virus-info/bulletins/bull-225.shtml
Given all the recent and historical news of data breaches of personal e-mail accounts, social media accounts and even phone account passwords, it is a wonder that we are still using password combinations that are incredibly easy to guess. Typically, most users maintain a single password for almost all the sites they access. Passwords such as these are dangerous because they are the first combinations attempted by attackers' brute-force tools. The challenge is that cyber criminals are well aware that many of their targets still fail to employ a strong password policy, and so will "pre-load" their dictionary attacks with the most common combinations; which in turn means almost instant access to a substantial number of users' personal data. If an attacker can compromise even a single password from a user, it can mean carte blanche for access to other sites and systems thereafter. It's clear that the strongest security controls rely on good password strength and regular changes; followed well, these practices can deny attackers continued access to systems. The reality we face in today's threat minefield is that human error is the highest contributing factor in why threats exist and attackers succeed in exploiting their targets. Bad actors (hackers) are well aware that we are only as strong as our weakest link; this is why they have increasingly turned their focus to the tried and tested method of social engineering, including brute-force attacks against systems and servers protected by weak passwords (or, in far too many cases, default ones printed in user manuals).
Unless you are in a position to store all your personal data offline, away from internet/cloud-based services (which these days is practically impossible, given the information held about individuals for banking, government and e-communication purposes), your approach to better security should start with education: learn what you can do to limit your exposure to threats and data breaches, and work to ensure that your most sensitive data is stored offline rather than on publicly hosted cloud networks. Unfortunately, even with complex passwords we are almost fighting a losing battle; this is because cyber criminals can use botnet ecosystems to crack encrypted files or password-protected data (through hashes of the password, or direct brute-force attack), or make use of underground GPU-based "cracking rigs" that can quite literally attempt billions of combinations per second. This means your average 8-character password (the minimum mandated by many online systems today) can be cracked in days. A great deal of research has gone into the minimum recommended password length: all users should be choosing passwords of at least 12 characters (alphanumeric, with special characters) that are completely random and that would challenge even the most sophisticated cracking rigs for hire on the cyber criminal underground. Regardless of the level of technology implemented to protect networks, systems and applications, if users share information they shouldn't (passwords, account details, corporate data or personally identifiable information) or click on links that redirect them to malware, then it becomes a great deal more difficult (albeit not impossible) to adequately protect ourselves in this insatiably online world.
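The arithmetic behind those claims is easy to check. The sketch below estimates worst-case brute-force time; the alphabet size (72 printable characters) is an assumed figure for illustration, and the guess rate simply takes the "billions of combinations per second" cited above at face value.

```python
def crack_time_seconds(length, alphabet_size, guesses_per_second):
    """Worst-case time to exhaust every password of the given length."""
    return alphabet_size ** length / guesses_per_second

rate = 1e9  # "billions of combinations per second", as cited above
# 8 characters drawn from roughly 72 symbols (mixed-case letters,
# digits and some punctuation) versus 12 characters from the same set
short = crack_time_seconds(8, 72, rate)
long = crack_time_seconds(12, 72, rate)
print(f"8 chars : {short / 86400:.1f} days")
print(f"12 chars: {long / (86400 * 365):.0f} years")
```

With these assumptions the 8-character password falls in days, while the 12-character one holds out for hundreds of thousands of years, which is the whole argument for the longer minimum.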
Keeping local security applications up to date, and installing programs that can inspect the links embedded in e-mails or social media messages for known malicious sources, is a good step towards identifying potentially harmful communications; beyond that, a "keep it simple" methodology in all online security endeavours will always provide a high level of personal protection against the latest scams. Overall there are two approaches to protecting your data: first, guard access to your data stores (e-mail, social media, online file sharing) with passwords of at least 12 characters; second, encrypt key data files with strong cipher algorithms. In the end, you want to make the cost of accessing the data far outweigh its value, or at least gain assurance that by the time it could theoretically be accessed, it is no longer useful to whoever exfiltrated it. Even if you do none of this, don't make the cyber criminals' job any easier by choosing easy password phrases.
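For the first approach, a completely random password is easy to generate with Python's standard `secrets` module; the 12-character floor below follows the recommendation above, and the default length is a sketch-level choice, not a mandate.

```python
import secrets
import string

def random_password(length=16):
    """Generate a completely random password from letters, digits and
    punctuation, using the OS cryptographic random source."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Pairing a generator like this with a password manager removes the temptation to reuse one memorable phrase everywhere.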
Chapter 7C – The Intel Pentium System

The Pentium line is the culmination of Intel's IA–32 architecture and, possibly, the beginning of the IA–64 architecture. In this sub–chapter, we examine its design in a bit more detail. We shall note some features that have been created to support modern operating systems. In order to understand these features, we need to discuss the operating system itself. As noted several times previously, a modern computer is a complete system. The major components of this system include the compiler (used to write programs for the computer), the motherboard (with its busses and mounting slots), the CPU, and the operating system. We begin this chapter with a basic definition from the study of operating systems. This is the necessarily vague definition of a process. The term "program" can mean many things, among which is the physical listing on paper of the high–level or assembly language text. When a program is executing, it acquires a number of assets (memory, registers, etc.) and becomes a process. Basically, a process is a program under execution, along with all the non–sharable assets required to support that execution. There is more to the definition than that, but this imprecise notion will support our discussion below. In memory management, the goal is to consider two processes logically executing on the computer at the same time, though probably executing sequentially, one after another in turn. The assets of each process (including the binary image of the executing code) must be protected from the other process.

IA–32 Memory Segmentation

Early computers ran programs mostly written by the users, with only a small amount of system software to support the user programs. The logical model of execution was that of a single program under execution; the user program would call system routines as needed.
As the operating system evolved, the execution model became more one of parallel processes, perhaps executing sequentially but better considered logically as executing in parallel. The system processes were best seen as separate from the user process, requiring protection from accidental corruption by the user program. Such protection requires some sort of hardware support for memory management. Basic to the idea of memory management is the definition of ranges of the address space that a particular process can access. In many modern computers, the address space is divided into logical segments. For each logical segment that a process can access, the hardware defines the starting address of that segment, the size of the segment, and access rights owned by the process. The later IA–32 implementations, including all Pentium models, supported three memory segmentation modes to facilitate memory management by the operating system. These are real mode, protected mode, and virtual 8086 mode [R018, page 586; R019, page 36]. Real mode implements the programming mode of the Intel 8086 almost exactly, with a few extra features to allow switching to other modes. This mode, when available, can be used to run MS–DOS programs that require direct access to system memory and hardware devices. Programs run in real mode can cause the operating system to crash. If a real mode program is one of many running on the computer at the time, all of the other programs crash as well. There is no protection among programs; the computer just stops responding to input. In this mode, the segment registers are used purely to calculate addresses; see the previous sub–chapter. There is one real–mode data structure that requires discussion, as it will lead to a more general data structure used in protected mode. This is the IVT (Interrupt Vector Table), which is used to activate software associated with a specific I/O device. 
We shall discuss I/O management, including I/O interrupts and I/O vectors, in chapter 9 of this text. Here is a brief description of an input I/O operation, to show the significance of the IVT.
1. The I/O device signals the CPU that it is ready to transfer data by asserting a signal called an "interrupt". This signal is asserted low.
2. When the CPU is ready to handle the transfer, it sends out a signal, called an "acknowledge", to initiate the I/O process itself.
3. As a first step in the I/O process, the device that asserted the interrupt identifies itself to the CPU. It does this by sending a vector, which is merely an address used to select an entry in the IVT. The IVT should be considered an array of entries, each of which contains the address of the program that handles a specific I/O device.
4. The ISR (Interrupt Service Routine) appropriate for the device begins to execute.
There is more to the story than this, but we have hit the essential idea: a single table is used to manage the input and output for all executing programs.
Protected mode is the native state of the Pentium processor, in which all instructions and features are available. Programs are given separate memory areas called segments, and the processor uses the segment registers and other associated registers to manage access to memory, so that no program can reference memory outside its assigned area. The operating system is thus protected from intrusion by user programs. The operating system operates in a privileged state in which it can change the segment registers in order to access any area of memory. Virtual 8086 mode is a sub–mode of protected mode. In this mode, many of the features of protected mode are active. The processor can execute real–mode software in a safe multitasking environment. If a virtual 8086 mode process crashes or attempts to access memory in areas reserved for other processes or the operating system, it can be terminated without adversely affecting any other process.
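The dispatch role of the IVT described in the steps above can be modelled in a few lines of Python. The vector numbers and handler names below are made up for illustration; the point is only that an interrupt vector is an index into a table of handler addresses.

```python
def keyboard_isr():
    return "keyboard handled"

def disk_isr():
    return "disk handled"

# The IVT is just a table indexed by the vector the device supplies;
# each entry holds the address (here, a Python function) of the ISR.
ivt = {0x09: keyboard_isr, 0x0E: disk_isr}

def interrupt(vector):
    """Dispatch an interrupt: look up the vector and run the ISR."""
    return ivt[vector]()
```

Giving each virtual 8086 session its own copy of such a table is exactly how the Pentium lets different sessions route the same vector to different I/O services.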
In protected mode, and its sub–mode virtual 8086 mode, each process is assigned a separate session, which allows for proper management of its resources. Part of that management involves creation of a separate IVT for that session, allowing the Pentium to allocate different I/O services to separate sessions. More importantly, it provides protection against software crashes. Windows XP can manage multiple separate virtual 8086 sessions at the same time, possibly in parallel with execution of programs in protected mode. This idea has been extended successfully to that of a virtual machine, in which a number of programs can execute on a given machine without affecting other programs in any way. The large IBM mainframes, including the z/9 and z/10, call this idea an LPAR (Logical Partition). One key logical component of the virtual machine idea has yet to be discussed; this is called virtual memory. It will be discussed fully in chapter 12 of this textbook. There is one important point that can be stated even at this early stage: the program generates addresses that are modified by the operating system into actual addresses in physical memory. As a result, the operating system controls access to real physical memory and can use that control to protect processes from one another. In protected mode, as well as in its sub–mode virtual 8086, addresses to physical memory are generated in a number of steps. Three terms related to this process are worth mentioning: the effective address, the linear address, and the physical address. With the exception of the term "physical address", which refers to the actual address in the computer memory, the terms are somewhat contrived. In the IA–32 designs, the effective address is the address generated by the program before modification by the memory management unit. The rules for generation of this address are specified by the syntax of the assembly language.
The effective address is passed to the memory management unit: first to the segmentation unit, which accesses the segment registers to create the linear address, and which then accesses a number of other MMU (Memory Management Unit) registers to determine the validity of the address value and the validity of the access: read, write, execute, etc. The translation from linear address to physical address is controlled by the virtual memory system, the topic of a later chapter. Here is another topic that we continue to mention in passing, with a promise to discuss it more fully at a later time. For the moment, we shall describe the advantages of such a system, and again postpone a full discussion to another chapter. Each Pentium product is packaged with a cache memory system designed to optimize memory access in a system that is referencing both data memory and instruction memory at the same time. We should note that it is the general practice to keep both data and executable instructions in the same main memory, and to differentiate the two only in the cache. This is one example of the common use of cache: it causes the memory system to act as if it had a certain desirable attribute, without altering the large main memory to actually have that attribute. At this time, let's state a few facts. Because it is smaller, the Level 1 cache (L1 cache) is faster than the L2 cache. Because it is smaller than main memory, the L2 cache is faster than the main memory. A multilevel cache applies the same trick twice. For example, a 32 KB L1 cache combined with a 1 MB L2 cache acts as if it were a single cache memory with an access time only slightly slower than that of the actual L1 cache. Then the combination of cache memory and main memory acts as if it were a single large memory (2 GB) with an access time only slightly slower than that of the cache memory. Now we have a memory that is functionally both large and fast, while no single element actually has both attributes.
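The "large and fast" claim can be quantified with a standard average-access-time calculation. The hit rates and latencies below are assumed round numbers for illustration, not figures from the text.

```python
def effective_access_time(levels, memory_time):
    """Average access time for a multilevel cache, assuming a miss at
    one level simply adds the access time of the next level probed.
    levels: list of (hit_rate, access_time) pairs, L1 first."""
    miss = 1.0      # fraction of accesses that reach the current level
    total = 0.0
    for hit_rate, access_time in levels:
        total += miss * access_time
        miss *= (1.0 - hit_rate)
    return total + miss * memory_time

# Hypothetical figures: L1 at 1 ns with 95% hits, L2 at 5 ns catching
# 90% of L1 misses, main memory at 80 ns.
t = effective_access_time([(0.95, 1.0), (0.90, 5.0)], 80.0)  # ≈ 1.65 ns
```

Even with an 80 ns main memory, the average access time stays within a factor of two of the L1 latency, which is exactly the effect the text describes.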
Modern memory designs have added a write buffer, allowing short bursts of memory writes at a rate much higher than the main memory can sustain. Suppose that the main memory has a cycle time of 80 nanoseconds, requiring an 80–nanosecond interval between two independent writes to memory. A fast write buffer might be able to accept eight memory writes in that time span, sending each on to main memory at the slower rate. We mention in passing that some multi–core Pentium designs have three levels of cache memory. Consider the Intel Core i7 die: this CPU has four cores, each with its own L1 and L2 caches, plus a Level 3 cache that is shared by the four cores. This design illustrates two realities of CPU design with regard to cache memory. First, placement of cache memory on the CPU chip significantly increases execution speed, as on–chip accesses are faster than accesses to another chip. Second, it aids power management, because memory uses less power per unit area than does CPU logic. Most modern computers divide storage devices into three classes: registers, memory, and external storage (such as disks and magnetic tape). In earlier times, the register set (also called the register file) was distinctly associated with the CPU, while main memory was obviously separate from the CPU. Now that designs have on–chip cache memory, the distinction between register memory and other memory is purely logical. We shall see that difference when we study a few fragments of IA–32 assembly language. One of the first steps in designing a CPU is the determination of the number and naming of the registers to be associated with it. There are many general approaches, and then there is the approach seen on the Pentium. The design used in all IA–32 and some IA–64 designs is a reflection of the original Intel 8080 register set.

Register set of the Intel 8080 and 8086

The Intel 8080 and Intel 8086 designs date from a time when single–accumulator machines were still common.
As mentioned in a previous chapter, it is quite possible to design a CPU with only one general–purpose register; this is called the accumulator. The provision of seven general–purpose registers in the Intel 8080 design was a step up from existing practice. We have already discussed the evolution of the register set over the evolution of the line. The Intel 8080 had 8–bit registers; the Intel 8086, 80186, and 80286 each had 16–bit registers; and the IA–32 line (beginning with the Intel 80386) has 32–bit registers. The Intel 8080 set the trend; newer models might have additional registers, but each one had to include the original register set in some fashion.

Register set of the Intel 80386

The Intel 80386 was the first member of the IA–32 design line. It is a convenient example for purposes of discussion. In fact, it is common practice for introductory courses in Pentium assembly language to focus almost exclusively on the Intel 80386 Instruction Set Architecture (register set and assembly language instructions), and to treat the full Pentium ISA as an extension. The Intel 80386 register set is described below. EAX is the general–purpose register used for arithmetic and logical operations. Recall from the previous chapter that parts of this register can be separately accessed. This division is seen also in the EBX, ECX, and EDX registers; code can reference BX, BH, CX, CL, etc. EAX has an implied role in both multiplication and division. In addition, the A register (AL in the Intel 80386 usage) is involved in all data transfers to and from the I/O ports. Here are some examples of IA–32 assembly language involving the EAX register. Note that the assembly language syntax denotes hexadecimal numbers by appending an "H".

MOV EAX, 1234H   ; Set the value of EAX to hexadecimal 1234.
                 ; The format is destination, source.
CMP AL, 'Q'      ; Compare the value in AL (the low-order 8
                 ; bits of EAX) to 81, the ASCII code for 'Q'
MOV ZZ, EAX      ; Copy the value in EAX to memory location ZZ
                 ; A divide instruction would divide the 32-bit
                 ; value in EAX by a 16-bit divisor.

Here is an example showing the use of the AX register (AH and AL) in character input.

MOV AH, 1        ; Set AH to 1 to indicate the desired I/O
                 ; function: read a character from standard input
INT 21H          ; Software interrupt to invoke an operating
                 ; system function; here the value 21H (33 in
                 ; decimal) indicates a standard I/O call
MOV XX, AL       ; On return from the function call, register AL
                 ; contains the ASCII code for a single character.
                 ; Store this in memory location XX.

EBX can be used as a general–purpose register, but it was originally designed to be the base register, holding the address of the base of a data structure. The easiest example of such a data structure is a singly dimensioned array.

LEA EBX, ARR     ; The LEA instruction loads the address
                 ; associated with a label, not the value
                 ; stored at that location
MOV AX, [EBX]    ; Using EBX as a memory pointer, get the 16-bit
                 ; value at that address and load it into AX
ADD EAX, EBX     ; Add the 32-bit value in EBX to that in EAX

ECX can be used as a general–purpose register, but it is often used in its special role as the counter register for loops and bit–shifting operations. This code fragment illustrates its use.

     MOV EAX, 0    ; Clear the accumulator EAX
     MOV ECX, 100  ; Set the count to 100 for 100 repetitions
TOP: ADD EAX, ECX  ; Add the count value to EAX
     LOOP TOP      ; Decrement ECX, test for zero, and jump
                   ; back to TOP if non-zero

At the end of this loop, EAX contains the value 5,050. EDX can be used as a general–purpose register, but it can also support input and output data transfers. It also plays a special part in executing integer multiplication and division.
In general, the product of two 8–bit integers is a 16–bit integer, the product of two 16–bit integers is a 32–bit integer, and the product of two 32–bit integers is a 64–bit integer. Remember that register AL is the 8 low–order bits of EAX, and AX is the 16 low–order bits. One important item to note is that the EAX register, or whatever part of it is used in the MUL operation, is implicitly part of the operation, without being called out explicitly.

MOV AL, 5H       ; Move decimal 5 to AL
MOV BL, 10H      ; Decimal 16 to BL
MUL BL           ; AX gets the 16-bit number 0050H (80 decimal).
                 ; The instruction says multiply the value in
                 ; AL by that in BL and put the product in AX.
                 ; Only BL is explicitly mentioned.

16–bit multiplications use AX as a 16–bit register. For compatibility with the Intel 8086, the full 32 bits of EAX are not used to hold the product. Rather, the two 16–bit registers DX and AX are viewed as forming a 32–bit pair that stores it. Again, note that the 16–bit version of MUL automatically takes AX as holding one of the integers to be multiplied.

MOV AX, 6000H
MOV BX, 4000H
MUL BX           ; DX:AX = 1800 0000H

The 32–bit implementation of multiplication uses EAX to hold one of the integers to be multiplied and uses the register pair EDX:EAX to hold the product. Here is an example.

MOV EAX, 12345H
MOV EBX, 10000H
MUL EBX          ; Form the product EAX times EBX
                 ; EDX:EAX = 0000 0001 2345 0000H

Register DX can also hold the 16–bit port number of an I/O port.

MOV DX, 0200H
IN AL, DX        ; Get a byte from the port at address 200H

The ESI and EDI registers are used as source and destination addresses for string and array operations. These names are sometimes expanded as "Extended Source Index" and "Extended Destination Index". They facilitate high–speed memory transfers. The EBP register is used to support the call stack for high–level language procedure calls. We shall discuss this more in the next chapter, in which we discuss subroutines.
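The widening products in the MUL examples above can be checked with a short Python model of the high-half/low-half register split:

```python
def mul_widening(a, b, bits):
    """Model the IA-32 MUL: two n-bit operands give a 2n-bit product,
    split into a high half (DX/EDX) and a low half (AX/EAX)."""
    product = a * b
    mask = (1 << bits) - 1
    return product >> bits, product & mask  # (high, low)

# 8-bit:  AL=05H times BL=10H gives AX = 0050H
assert mul_widening(0x05, 0x10, 8) == (0x00, 0x50)
# 16-bit: AX=6000H times BX=4000H gives DX:AX = 1800H:0000H
assert mul_widening(0x6000, 0x4000, 16) == (0x1800, 0x0000)
# 32-bit: EAX=12345H times EBX=10000H gives EDX:EAX = 1H:23450000H
assert mul_widening(0x12345, 0x10000, 32) == (0x1, 0x23450000)
```

All three results match the register contents given in the assembly examples.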
Briefly put, it functions much like a stack pointer, but does not point to the top of the stack. The next two registers, EIP and ESP, are 32–bit versions of older 16–bit counterparts. We discuss these here, and then introduce the 16–bit variants by discussing segments again. The EIP is the 32–bit Instruction Pointer, so called because it points to the instruction to be executed next. Many other architectures call this register by the more traditional, if less appropriate, name "Program Counter". Jump and branch instructions, whether unconditional or conditional (when the condition is true), achieve their effect by forcing a target address into the EIP. The ESP is the 32–bit Stack Pointer, used to hold the address of the top of the stack. This register is not commonly accessed directly except as part of a procedure call. We must make the point here that the stack is not always treated as an ADT (Abstract Data Type) with PUSH as the only way to place an item on it. We shall investigate direct manipulation of the ESP in more detail when we discuss allocation of dynamic memory for local variables. The EFLAGS register holds a collection of at most 32 Boolean flags with various meanings. The flags are divided into two broad categories: control flags and status flags. Control flags can cause the CPU to break after every instruction (good for debugging), interrupt execution on detecting arithmetic overflow, enter protected mode, or enter virtual 8086 mode. The status flags reflect the state of the execution and include CF (the carry flag, indicating a carry out of the last arithmetic operation), OF (the overflow flag, indicating that the result is too large or too small to be represented), SF (the sign flag, indicating that the last result was negative), ZF (the zero flag, indicating that the last result was zero), and several more. There are six 16–bit segment registers (CS, SS, DS, ES, FS, and GS), which are holdovers from the 16–bit Intel 8086.
As discussed in the previous chapter, these are used to allow generation of 20–bit addresses from 16–bit registers. The two standard register pairings are CS:IP (Code Segment and Instruction Pointer) and SS:SP (Stack Segment and Stack Pointer). In the more modern Pentium usage, these segment registers are used in combination with descriptor registers to support memory management.

Register set of the Pentium

In addition to the above register set, the Pentium architecture calls for six 64–bit registers that support memory management (CSDCR, SSDCR, DSDCR, ESDCR, FSDCR, and GSDCR), the TR (Task Register), the IDTR (Interrupt Descriptor Table Register), two descriptor table registers (the GDTR – Global Descriptor Table Register, and the LDTR – Local Descriptor Table Register), and a few more. Then there are the sixteen specialized data registers (MM0 – MM7 for the multimedia instructions, and FP0 – FP7 for floating–point arithmetic). Newer versions of the architecture almost certainly contain still more registers. In the case of memory management, it is important to remember that the operating system functions by setting up and then using some fairly elaborate data structures. Each of these structures has a base address stored in one of these registers for fast access. We now discuss some of the addressing modes used in the Pentium architecture. We shall use two–argument instructions to illustrate them, as that is easier. The simplest mode is also the fastest to execute: the data register direct mode. Here is an example.

MOV EAX, EBX     ; Copy the value from EBX into EAX.
                 ; The value in EBX is not changed.

In immediate mode, one of the arguments is a constant value to be used directly. Here are some examples, a few of which are not valid.

MOV EBX, 1234H   ; EBX gets the value 01234H
MOV 123H, EBX    ; NOT VALID. The destination of a move must be
                 ; a register or a memory location, not a constant.
MOV AL, 1234H    ; NOT VALID. Only one byte can be moved into
                 ; an 8-bit register. This value is two bytes.
Memory Direct Mode

In this mode, one of the arguments is a memory location. Here are some examples.

MOV ECX, [1234H] ; Move the value at address 1234H to ECX.
                 ; Not the same as the immediate example above.
MOV EDX, WORD1   ; Move the contents of address WORD1 to EDX
MOV WORD2, EDX   ; Move the contents of the 32-bit register
                 ; EDX to memory location WORD2
MOV X, Y         ; NOT VALID. Memory-to-memory moves are
                 ; not allowed in this architecture.

Address Register Direct

In this mode, the address associated with a label is loaded into a register. Here are two examples, one of which is memory direct and one of which is address register direct.

LEA EBX, VAR1    ; Load the address associated with VAR1
                 ; into register EBX.
                 ; This is address register direct.
MOV EBX, VAR1    ; Load the value at address VAR1 into EBX.
                 ; This is memory direct addressing.

Register Indirect

Here the register contains the address of the argument. For example:

MOV EAX, [EBX]   ; EBX contains the address of a value
                 ; to be moved to EAX.

Note that the following two code fragments do the same thing to EAX. Only the first fragment changes the value in EBX.

LEA EBX, VAR1    ; Load the address VAR1 into EBX
MOV EAX, [EBX]   ; Load the value at that address into EAX

MOV EAX, VAR1    ; Load the value at address VAR1 into EAX

Direct Offset Addressing

Suppose an array of 16–bit entries at address AR16. We may employ direct offset addressing in two ways to access members of the array.

MOV CX, AR16+2   ; Load the 16-bit value at address
                 ; AR16 + 2 into CX. For a zero-based
                 ; array of 16-bit entries, this is element 1.
MOV CX, AR16[2]  ; Does the same thing. Computes the
                 ; address (AR16 + 2).

Base Index Addressing

This mode combines a base register with an index register to form an address.

MOV EAX, [EBP+ESI] ; Add the contents of ESI to that of EBP
                   ; to form the source address. Move the
                   ; 32-bit value at that address to EAX.

Index Register with Displacement

There are two equivalent ways of writing this mode, due to the way the assembler interprets the second form.
Each uses an address, here TABLE, as a base address.

MOV EAX, [TABLE+EBP+ESI] ; Add the contents of ESI to that
                         ; of EBP to form an offset, then add
                         ; that to the address associated
                         ; with the label TABLE to get the
                         ; address of the source.
MOV EAX, TABLE[ESI]      ; Interpreted in the same way: the
                         ; index register contents are added
                         ; to the address TABLE.
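The addressing modes above can be mimicked with a toy machine in Python. This is a sketch only: memory is word-addressed rather than byte-addressed for simplicity, and `VAR1` and the register values are arbitrary.

```python
# Model memory as a list of 32-bit words and registers as a dict.
mem = [0] * 64
regs = {"EAX": 0, "EBX": 0, "EBP": 8, "ESI": 2}

VAR1 = 5            # a label is just an address known to the assembler
mem[VAR1] = 1234

# LEA EBX, VAR1  -- address register direct: load the address itself
regs["EBX"] = VAR1
# MOV EAX, [EBX] -- register indirect: fetch the value at that address
regs["EAX"] = mem[regs["EBX"]]
assert regs["EAX"] == 1234

# MOV EAX, VAR1  -- memory direct: the same result in a single step
assert mem[VAR1] == regs["EAX"]

# MOV EAX, [EBP+ESI] -- base index: sum two registers to form the address
mem[regs["EBP"] + regs["ESI"]] = 42
regs["EAX"] = mem[regs["EBP"] + regs["ESI"]]
assert regs["EAX"] == 42
```

The two-step LEA/indirect sequence and the one-step memory-direct move land on the same value, which is exactly the equivalence the text points out.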
Globalization of the knowledge economy has surprised many enterprises. The speed and impact of offshore outsourcing, and the poor global economy, are driving change in the workforce. Historically, companies in the United States, Europe and Japan have led globalization, because those countries pushed products and services into developing countries. As the business of offshore sourcing grows, globalization is beginning to become widely accepted elsewhere. With "nearshore" and offshore sourcing, the global equation has changed. Enterprises in developing countries and emerging markets are now reaching into developed economies, offering a talented workforce at a fraction of the price. Developed and developing economies are exploiting each other's markets, economies and labor forces. It is natural to expect that those disadvantaged by globalization — irrespective of market — will protest and make their issues known. Likewise, local politicians and political parties may try to protect jobs and win votes through legislation such as the bills currently being debated in four U.S. states aimed at blocking the outsourcing of government work to offshore enterprises. Moreover, unlike previous instances of globalization — in textiles, products and manufacturing — the latest round is occurring almost instantaneously over a vast and sophisticated communication network. This has enabled business, projects, tasks and jobs to be transferred to virtual workforces across the globe quickly and transparently — a trend that is occurring so rapidly as to disorient entire professions, societies and organizations.

Changing Nature of Technical Work

Another factor making outsourcing attractive is the changing nature of technical work. By 2006, service-oriented architecture (SOA) will be at least partially adopted in more than 60 percent of new, large and systematically oriented application development projects (0.7 probability).
The proliferation of Web services and SOA is causing software to be developed in smaller units that are easier to map to business processes. These smaller units are also ideal for an offshore environment. Larger projects are harder to manage and even more difficult in an offshore model. Smaller projects that use service-oriented development of applications (SODA) are easier to manage, are lower risk and will deliver better value over a shorter time frame as business begins to make the move to the real-time enterprise. With this move to SODA, technologists and business people are talking, working with and understanding processes better. Communication between all parties is in terms of processes and subprocesses, more accurately mapping business needs. Through 2006, service-oriented development will change the way software is built, packaged and sold by more than 80 percent of independent software vendors (0.7 probability). Quite simply, it is becoming easier to outsource than ever before.

Collective Activism Levels the Playing Field

As businesses collaborate, and as top-down control of work and employees weakens, regional labor markets will normalize. Workers in one area of the globe will hear about practices in other parts of the world, raising awareness and intensifying their demands for equity. Labor forces in relatively disadvantaged economies will lobby to bring workforce programs into alignment with those of their global peers. Meanwhile, the values of workers and consumers in wealthier regions will promulgate globally, creating pressure across markets to adopt safe and competitive labor practices. In the long term — 10 years or more — the continuous pressure for equitable practices will normalize work/life programs and start to narrow the gap among regional labor rates. The gap in rates between the Western economies and Southeast Asia will remain big, but the gaps between emerging-market countries in Southeast Asia or other global regions will narrow.
As global competition intensifies, emerging-market businesses will compete aggressively for top talent and seek to wring even more effort out of every person, yielding a degree of parity in compensation and improvements in workplace conditions, but a decline in the work/life balance. Without a significant upturn in IT investment in Europe and North America, the movement of work overseas will lead to job cuts and layoffs in IT, starting first with IT vendors and IT service providers, and moving steadily into user companies. For example, without an infusion of innovation that stems the outflow of IT work offshore, positions in the United States will quickly and steadily get filled by enterprises in emerging markets. Geographic and international centers of competency will emerge, shift and evolve. Consider this evolution: The United States' competency in Internet commercialization has been superseded by Southeast Asia's process-heavy competence in programming and development. The latter competency will be superseded by other regions' competencies in, for example, biotechnology, integrated consumer-business services or life sciences. For now, enterprises that are lured by low-cost labor markets will make decisions that satisfy immediate budget requirements, but many know little about domestic outsourcing, and even less about offshore outsourcing. They will likely face problems. As for their employees, the backlash will be real. Faced with enterprisewide displacement, soon-to-be-former employees will enter states of intellectual paralysis and productivity loss. Stress and uncertainty will increase. According to a 22 July 2003 article in the New York Times, IBM is now acknowledging the apparent necessity of moving service work to low-cost regions, and it is anticipating anger from displaced employees, as well as potential unionization for worker protection. 
Nevertheless, the displacement of jobs will not deter businesses from moving work to other markets: At Gartner's Outsourcing Summit in Los Angeles in June 2003, 80 percent of respondents conceded that the potential backlash would have no effect on their decisions to move forward with offshore outsourcing. Through 2006, labor unrest will be a significant "wild card" in the offshore outsourcing landscape, with fewer than 10 percent of executives adequately anticipating its disruptive impact on operations (0.8 probability). Vendor-employed software developers may turn to unionization or guilds that can bargain for them as a collective. At the same time, the fairly clinical way in which people are being excised from company payrolls may spark the first class-action suit about unlawful termination. Although there is frequent talk of "sweatshops" in many developing countries, the reality is often far different. In terms of economies of scale, domestic spending power and quality of life, many people in developing nations are compensated exceptionally well. As enterprises globalize, employers worldwide will be forced to offer more-competitive salaries and packages to their employees, especially those who are based abroad. Employees will be compensated based on skills, roles and merits. Already we have seen enterprises in one emerging market use comparatively high salaries to lure top talent from other emerging markets. Employers that fail to compensate accordingly will find their top employees moving elsewhere — the opportunities are bountiful, especially for companies in developing nations that are "playing catch-up" to India. Indian companies especially need to refine their hiring models to become fully engaged local citizens in local markets, rather than all-Indian companies that employ predominantly Indian staffs.
The hiring of locals in each market will not only lessen employee backlash, but it will also offer many benefits that only locals can provide, such as a deep understanding of local markets and mindsets.

- "Counter-Revolutionary Strategies for the Offshore Revolution" — For now, service providers should absorb and help shape the offshore revolution, and long term, they should move to annul it. By Rolf Jester
- "Reuters Insources Software Development Offshore" — Through refined processes and continuous training, Reuters has moved a large part of its software development from the United States and Europe to a more-efficient and cost-effective center in Thailand. By Dion Wiggins
- "Deciphering the Backlash Against Indian Service Providers" — Offshore providers should not overreact to the negative press and sentiments around the growing global backlash, which is an inevitable sign of a maturing paradigm or trend. By Partha Iyengar
- "U.S. Offshore Outsourcing: Structural Changes, Big Impact" — As IT work moves overseas, the dislocation of IT jobs is real. By Diane Morello
- "Offshore Outsourcing Can Benefit Europe in the Long Term" — Jobs being lost to overseas suppliers is not a new phenomenon for Europe, but the challenge is for governments and unions to see the long-term benefits when their voters and members are confronted with the prospect of unemployment. By Ian Marriott
- "Offshore Insourcing vs. Offshore Outsourcing" — Enterprises may benefit from offshore delivery without turning to outsourcing. By Rebecca Scholl and Sujay Chohan
- "The Impact on People When Going Offshore for IT Services" — The worldwide IT services market is experiencing one of the biggest changes in its history — the paradigm shift to offshore outsourcing. By Frances Karamouzis
<urn:uuid:6fd4ede2-0461-45a5-8fd1-cd23657a5069>
CC-MAIN-2017-04
https://www.gartner.com/doc/405776
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00486-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943723
1,753
2.546875
3
Tit for Tat - Do Good & Have Good

It is a law of nature that whatever action we take in this world, there is always a reaction. If we do well, we stand to gain a good reward. If we do badly, we should expect a bad outcome ultimately. "As you sow, so shall you reap" is a popular saying. The Holy Qur'an has also guided on this subject; it says: "If you do good, you do good to yourselves. Likewise, if you do evil, you do evil to yourselves." (17:7) One of the companions of the Prophet Muhammad was very fond of this verse of the Qur'an. He used to recite it loudly and repeatedly wherever he went. A woman who had heard him once wanted to prove him wrong and thus make him unpopular among his people. She thought up a plot against him. She prepared some sweets mixed with poison and sent them to him as a present. When he received them, he went out of the city taking the sweets with him. On the way, he met two men who were returning home from a long journey. They appeared tired and hungry, so he thought of doing them a good turn. He offered them the sweets. Of course, he was not aware that they were secretly mixed with poison. No sooner had the two travelers taken the sweets than they collapsed and died. When the news of their death reached Medina, the city where the Prophet Muhammad resided, the man was arrested. He was brought in front of the Prophet Muhammad and he related what had actually happened. The woman, who had mixed poison with the sweets, was also brought to the court of the Prophet Muhammad. She was stunned to see the two dead bodies of the travelers there. They in fact turned out to be her two sons who had gone away on a journey. She admitted her evil intention before the Prophet Muhammad and all the people present. Alas, the poison she had mixed in the sweets to kill the companion of the Prophet Muhammad had instead killed her own two sons.
Lesson to learn from this story - What a splendid example of a tragic reaction to a bad action; it shows how one reaps what he sows. "Do as you would be done by" are the words of wisdom from the learned and wise men of the past. They teach us to do good to others in the same way as we like others to do good to us.
<urn:uuid:9da2aabc-27d1-4167-b996-f30272a400f1>
CC-MAIN-2017-04
http://www.knowledgepublisher.com/article-208.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00330-ip-10-171-10-70.ec2.internal.warc.gz
en
0.98667
593
2.59375
3
Following the publication of a report last year about lessons learned from social media’s use during Hurricane Sandy, the U.S. Department of Homeland Security released a new document on July 1 to address how social platforms can be and are being used for situational awareness. Developed by the DHS Science and Technology Directorate’s Virtual Social Media Working Group, the report “addresses various challenges associated with the use of social media for situational awareness, the integration of social media within the operational environment, and identifies areas requiring further consideration, research and development,” according to FirstResponder.gov. Called Using Social Media for Enhanced Situational Awareness and Decision Support (PDF), the report says that while situational awareness is not a new concept for emergency managers, it is a focus point for response and recovery efforts — and social media provides additional channels through which information can be shared and requested. “If integrated with traditional data, social media can help emergency responders achieve and maintain situational awareness in real time. This will assist with decision-making, planning and resource allocation,” says the report. While numerous emergency managers and agencies have embraced social media, identifying best practices and lessons learned helps advance the use of the platforms and how information is both collected and disseminated. Real-world examples in the document include Boston’s use of Twitter following the bombing at the 2013 marathon. The Boston Police Department tweets in effect became the official source of information for everyone, including the media, especially after numerous reports by the press turned out to be false, Emergency Management reported.
“We realized that we were fortunate that we already had the infrastructure set up, so we already had a Twitter account, a blog and a Facebook page,” said Cheryl Fiandaca, who at the time was the bureau chief of public information for the Boston Police Department. “If we hadn’t had that in place and hadn’t been using it in a substantial way, I think we would have been at a terrible loss during that time.” The Clark Regional Emergency Services Agency in Washington is another example that’s highlighted for its use of social media monitoring on an ongoing basis and during emergencies. The agency uses the free tool TweetDeck to monitor Twitter lists, which are categorized into channels including national news media, local news media, public safety, community members and professional contacts. Emergency Manager Cheryl Bledsoe and her staff members maintain awareness of the tweets within each group. Most staff members in the agency run two computer screens: one for daily work and one for watching tweets that are filtered according to relevant keywords and hashtags. “Anytime something begins to trend or starts getting popular, you’ll see the TweetDeck screen start moving a little bit faster and that will catch our eye. It may be an earthquake or maybe a celebrity has died,” Bledsoe said last year. As increasingly more agencies and organizations seek to add social media monitoring into their operations, the report outlines seven key factors relating to information and data requirements to effectively use the platforms: The report concludes with a series of questions related to developing best practices to encourage information sharing (e.g., Can the definition of personally identifiable social media information be standardized across government agencies?) and issues that require further consideration. 
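The keyword-and-hashtag monitoring described above — one screen for daily work, one for tweets matched against relevant terms — can be sketched in a few lines. The keywords, hashtags, and sample tweets below are hypothetical placeholders, not the agency's actual configuration:

```python
# Minimal sketch of keyword/hashtag filtering for situational awareness.
# All terms and sample tweets here are hypothetical placeholders.

KEYWORDS = {"earthquake", "flooding", "power outage"}
HASHTAGS = {"#clarkcounty", "#pnwwx"}

def matches(tweet_text: str) -> bool:
    """Return True if a tweet mentions a tracked keyword or hashtag."""
    text = tweet_text.lower()
    return any(k in text for k in KEYWORDS) or any(h in text for h in HASHTAGS)

def filter_stream(tweets):
    """Keep only the tweets an operator would want surfaced."""
    return [t for t in tweets if matches(t)]

sample = [
    "Minor earthquake felt downtown, no damage reported",
    "Great coffee this morning!",
    "Crews responding to a power outage near the river #clarkcounty",
]
print(filter_stream(sample))
```

A real deployment would feed this from a streaming API rather than a static list, but the core idea — surface only what matches curated terms — is the same.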
But Emergency Management blogger Gerald Baron doesn’t want the list of questions to discourage emergency managers and agencies from using social media platforms: “You want to get started? Just get on social media and start discovering for yourself what amazing things can be found.”

This story was originally published by Emergency Management.
<urn:uuid:f5dd7b1e-1935-4bb4-9c8c-3c6ea76f9be8>
CC-MAIN-2017-04
http://www.govtech.com/internet/7-Factors-for-Effectively-Adding-Social-Media-Monitoring-into-Operations.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00238-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943004
737
2.75
3
Researchers have turned a display annoyance into a way to show two different images simultaneously. When an LCD is tilted, colors change and become difficult to see, but with Dual View from Microsoft Research Asia different images and video can be shown. "We're actually exploiting this property by using a special algorithm to render the image in a special way so that we can hide or show different images at different angles," said Xiang Cao, a researcher with Microsoft Research Asia. "Basically making a bug into a feature." To see the Dual View display, watch a video on YouTube. In one example, Xiang held a tablet screen horizontally, displaying a game of cards. On one side of the screen, a player could see his own cards, but not his opponents'; the other side showed only the opponents' cards. It's not perfect, because it's limited to the optical properties of the display. "You may lose a little bit of contrast or saturation and there are certain angles that work better than others," Xiang said. There are a variety of uses for the technology, from privacy to gaming to even potential 3D applications. There are no immediate plans for commercialization.
<urn:uuid:2a5d0a11-dcee-463f-8e09-0a9ca135b30f>
CC-MAIN-2017-04
http://www.cio.com/article/2396288/hardware/exploited-display-bug-lets-lcds-show-two-images-simultaneously.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00202-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943738
239
2.859375
3
Commercial grade green and red laser pointers emit energy far beyond what is safe, posing skin, eye and fire hazards. That was the conclusion of a National Institute of Standards and Technology (NIST) study on the properties of handheld laser devices that tested 122 of the devices and found that nearly 90% of green pointers and about 44% of red pointers tested were out of federal safety regulation compliance. "Handheld lasers (laser pointers) have been around for decades. However recent advances in laser technology have had a dramatic impact, enabling low-cost, high power laser pointers at a variety of visible wavelengths. These powerful lasers have found their way into society in large numbers and are being operated by people who may be unfamiliar with their potential for eye injury, resulting in increased reports of retinal injuries," stated NIST researchers Joshua Hadler and Marla Dowell in a paper on laser safety they presented this week at the International Laser Safety Conference. Green lasers generate green light from infrared light. Ideally, the device should be designed and manufactured to confine the infrared light within the laser housing. However, according to the new NIST results, more than 75 percent of the devices tested emitted infrared light in excess of the CFR limit, NIST stated. "The NIST tests were conducted on randomly selected commercial laser devices labeled as Class IIIa or 3R and sold as suitable for demonstration use in classrooms and other public spaces. Such lasers are limited under the Code of Federal Regulations (CFR) to 5 milliwatts maximum emission in the visible portion of the spectrum and less than 2 milliwatts in the infrared portion of the spectrum. About half the devices tested emitted power levels at least twice the CFR limit at one or more wavelengths. The highest measured power output was 66.5 milliwatts, more than 10 times the legal limit.
The power measurements were accurate to within 5%," the NIST researchers stated. Laser devices that exceed 3R limits may be hazardous and should be subject to more rigorous controls such as training, to prevent injury, according to the American National Standards Institute (ANSI). Green lasers in particular have gotten a bad reputation for being used by chuckleheads who think it is fun to point them at low flying aircraft. The Federal Aviation Administration (FAA) last May said the number of reported laser incidents nationwide had risen for the fifth consecutive year to 3,592 in 2011. Pointing a laser at an aircraft can cause temporary blindness or make airliner pilots take evasive measures to avoid the laser light. The FAA has begun to impose civil penalties against individuals who point a laser device at an aircraft. The maximum penalty for one laser strike is $11,000, and the FAA has proposed civil penalties against individuals for multiple laser incidents, with $30,800 the highest penalty proposed to date. In many of these cases, pilots have reported temporary blindness or had to take evasive measures to avoid the intense laser light. The FAA says the increase in annual laser reports is likely due to a number of factors, including the availability of inexpensive laser devices on the Internet; increased power levels that enable lasers to reach aircraft at higher altitudes; more pilot reporting of laser strikes; and the introduction of green and blue lasers, which are more easily seen than red lasers. The FBI has said: "Those responsible for lasering aircraft fit two general profiles. Consistently, it's either minors with no criminal history or older men with criminal records. The teens are usually curious or fall victim to peer pressure. The older men simply have a reckless disregard for the safety of others. There are also intentional acts of laser pointing by human traffickers or drug runners seeking to thwart airborne surveillance."
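The CFR limits quoted above (5 mW visible and 2 mW infrared for Class IIIa/3R devices) lend themselves to a simple numeric check. The sketch below is purely illustrative, not NIST's actual measurement procedure:

```python
# Illustrative check of a pointer's measured output against the CFR
# Class IIIa/3R limits cited in the article (not NIST's actual procedure).

VISIBLE_LIMIT_MW = 5.0   # max visible emission, milliwatts
INFRARED_LIMIT_MW = 2.0  # max infrared emission, milliwatts

def is_compliant(visible_mw: float, infrared_mw: float) -> bool:
    """True only if both visible and infrared output are within the limits."""
    return visible_mw <= VISIBLE_LIMIT_MW and infrared_mw <= INFRARED_LIMIT_MW

# The worst device NIST measured emitted 66.5 mW, over 13x the 5 mW limit.
print(is_compliant(visible_mw=66.5, infrared_mw=0.0))  # False
print(66.5 / VISIBLE_LIMIT_MW)                         # 13.3
```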
<urn:uuid:16d64585-5fc0-4291-8fa8-ad345be772de>
CC-MAIN-2017-04
http://www.networkworld.com/article/2224325/security/laser-pointers-produce-too-much-energy--pose-risks-for-the-careless.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00202-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957536
771
2.875
3
Four Principles of IT Accessibility

The WCAG also include a list of checkpoints, the first step in compliance verification. According to the WCAG, four principles form the foundation of IT accessibility:

Principle No. 1: Perceivable
User interfaces and any information contained within them must be easily viewable. There also should be alternative ways to read text and access video content (that is, closed captioning). All content must be distinguishable.

Principle No. 2: Operable
Users must be able to navigate Websites and applications via a keyboard and a mouse, and they should be provided with tools or assistive technology shortcuts to determine basic navigation. Developers cannot enforce time limits on Websites and applications unless there are reasonable security concerns that justify such constraints.

Principle No. 3: Understandable
Text should be readable and understandable, Web pages should be predictable and users should have access to input assistance that allows them to correct mistakes.

Principle No. 4: Robust
Content cannot conflict with assistive technologies and it must be robust enough that those technologies can reliably interpret it. For accessibility purposes, all content must provide role names and descriptions and use well-formed markup language.
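As a small illustration of the Perceivable principle (alternative ways to read content), the sketch below scans HTML for img tags that lack alternative text. It is a toy checker using only the standard library, not a WCAG conformance tool:

```python
# Toy check for one Perceivable requirement: every <img> should carry
# alternative text. An illustration only, not a WCAG conformance tool.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of img tags lacking a non-empty alt

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "<no src>"))

def images_missing_alt(html: str):
    """Return the src of each <img> without a non-empty alt attribute."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing

page = '<img src="chart.png" alt="Q3 sales chart"><img src="logo.png">'
print(images_missing_alt(page))  # ['logo.png']
```

Real audits would also cover decorative images (where an empty alt is intentional), ARIA roles, captions, and contrast, but the pattern — parse the markup, flag what assistive technology cannot interpret — is the same.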
<urn:uuid:c5553a13-7156-4def-a8a0-413cb65bd482>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Application-Development/How-to-Ensure-IT-Accessibility-in-Applications-and-Websites/2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00230-ip-10-171-10-70.ec2.internal.warc.gz
en
0.880383
252
2.796875
3
MongoDB databases have suffered a surge of ransomware attacks, with over 27,000 servers currently compromised as hackers steal and delete data from unpatched or poorly configured systems. Used for analytics and data study, MongoDB is a popular open-source NoSQL database; in database popularity rankings it comes after giants like Oracle, MySQL and Microsoft SQL Server. According to ethical hacker Victor Gevers, roughly one-fourth of the 99,000 MongoDB instances exposed to the internet have been attacked. The criminals mainly target instances whose admin accounts are not password-protected, using automated scanning tools to search the web for signs of insecure or improperly configured MongoDB systems, he added. The situation remains grim for MongoDB owners, with no sign of improvement. Worse, many groups are hitting the same servers again and again and overwriting each other's ransom notes, which makes it almost impossible to tell which attacker actually holds a victim's data; victims risk paying a ransom to a group that cannot return anything. Hackers use ransomware to attack organisations in particular, seizing or encrypting sensitive and important data before demanding a ransom to give it back. From small businesses to big enterprises, no one without proper defenses is safe from such threats: once valuable files are taken, companies often cannot get them back without giving in to the ransom demands.
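Most of these incidents trace back to instances that listen on the public internet with authentication disabled. A minimal hardening step in mongod.conf, assuming a reasonably modern MongoDB release (the paths and addresses are examples to adapt), looks like this:

```yaml
# mongod.conf -- minimal hardening sketch; adjust to your deployment
net:
  bindIp: 127.0.0.1       # listen only on localhost, not on all interfaces
security:
  authorization: enabled  # require authenticated, authorized users
```

With authorization enabled, an admin user must be created before clients can read or write, which defeats the blind automated scans described above.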
<urn:uuid:a53106b8-25e6-459e-a65c-8bc5d9e23dbe>
CC-MAIN-2017-04
https://latesthackingnews.com/2017/01/09/mongodb-database-hit-ransomware-attack/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00376-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947901
287
2.6875
3
A device designed by engineers at the Georgia Tech Research Institute (GTRI) is part of the Hurricane Imaging Radiometer (HIRAD), an experimental airborne system developed by the Earth Science Office at the NASA Marshall Space Flight Center in Alabama. Known as an analog beam-former, the GTRI device is part of the radiometer, which is being tested by NASA on a Global Hawk unmanned aerial vehicle. The radiometer measures microwave radiation emitted by the sea foam that is produced when high winds blow across ocean waves. By measuring the electromagnetic radiation, scientists can remotely assess surface wind speeds at multiple locations within the hurricanes. HIRAD could provide detailed information about the wind speeds and rain intensity inside hurricanes without the need to fly manned aircraft through the storms. In addition to the beam-former design, GTRI researchers also provided assistance to NASA with improvements aimed at a potential future, more advanced version of the radiometer. "Improved knowledge of the wind speed field will enable the National Hurricane Center to better characterize the storm's intensity," explained Timothy Miller, Research and Analysis Team Lead for the Earth Science Office at the NASA Marshall Space Flight Center. "Better forecasts of storm intensity and structure will enable better warnings of such important factors as wind strength and storm surge. That would allow businesses and residents to prepare with more confidence in their knowledge of what is coming."

[Photo caption: Glenn Hopkins, a GTRI research engineer, displays examples of the beam-formers designed by GTRI for use on the hurricane imaging radiometer now being tested by NASA. Photo courtesy of Georgia Tech/Gary Meek.]

HIRAD was flown above two hurricanes in 2010 and a Pacific frontal system in 2012.
Data it gathered on wind and rain will be provided to the scientific community for use in numerical modeling, and could also guide development of a next-generation system that would provide information on wind direction in addition to measuring wind speed and rain intensity. "We have verified the instrument concept in terms of sensitivity to wind speed and rain rate," Miller said. "We have also learned a lot about the factors that need to be considered in developing calibrated images from the flight data. That work is still ongoing." GTRI researchers supported development of the radiometer with design of the beam-formers, which are part of the radiometer's array antenna. The array antenna gathers microwave signals from the ocean and the GTRI-designed devices – several of which are required – form "fan" beams of electromagnetic energy across the ground path of the aircraft's travel. The resulting signals are then fed into sensitive receivers developed by researchers at the University of Michigan and ProSensing Inc., a Massachusetts company. "There are different ways to build antennas to solve this problem, but array antennas provide multi-channel capability and greater sensitivity," said Glenn Hopkins, a research engineer who headed up the GTRI design work. "Because this system is passive – it doesn't send out radiation – we need to have maximum sensitivity and a focus on minimizing noise in the system." The HIRAD system, also known technically as a microwave synthetic aperture radiometer, is designed to operate in the microwave spectrum, from about 4 gigahertz to 7 gigahertz. Discrete parts of that range are used to enable discrimination between ocean surface emission and that from the rain located between the instrument and the surface. "On the aircraft, the instrument would be flying a track over the storm, with a multitude of simultaneous beams," explained Hopkins. 
"We would be pixelating the surface and could determine what radiation is coming from each area to generate a map of the intensity of the wind speeds as we fly over the storm." Beyond supporting the radiometer's need for high sensitivity and low noise, the component also had to be as small and light as possible to be part of the Global Hawk payload. The GTRI design was manufactured by an outside company, and integrated directly onto the back of the instrument's antenna. The circuitry is just 20 one-thousandths of an inch thick, printed on flexible circuit materials. "This project is an example of the kinds of work we have been doing for the Department of Defense, and we're pleased that this technology can be transitioned to assist with weather prediction and research," Hopkins said. As part of a small business innovation research (SBIR) project with Spectral Research Inc., GTRI researchers also participated in an effort to increase the capability of the HIRAD array by designing a dual polarized array to replace the single polarized array that is part of the existing test system. The dual polarized array operates at the same 4 to 7 gigahertz range as the single polarized array, but provides both polarization channels in the same area. The dual polarized design exploited fragmented antenna technology developed at GTRI to support this broad range of frequencies. "One key challenge in the array study was to use the same footprint as the single polarization array," said Jim Maloney, a GTRI principal research engineer. "Prototype dual polarization arrays were built and measured to confirm the ability of GTRI's fragmented antenna technology to meet the bandwidth and form factor requirements." The Global Hawk can fly at altitudes of more than 60,000 feet, and can stay in the air for as long as 31 hours, allowing it to remain in the hurricane area as much as four times longer than piloted aircraft now used for monitoring hurricanes. 
It provides data that is more detailed than what satellites could provide. "A UAV is able to stay over the storm for much longer," Miller noted. "Compared to a satellite, the UAV observations are of much higher spatial resolution, and depending on the satellite's orbit, generally of a much longer time period. A satellite instrument would be able to observe storms continually, over a much larger area, but would provide much coarser spatial resolution." Development of HIRAD was supported by NASA and the National Oceanic and Atmospheric Administration (NOAA). The project involved partnerships among NASA's Marshall Space Flight Center, NOAA's Unmanned Aerial Systems Program, the University of Michigan, the University of Central Florida and NOAA's Hurricane Research Division.
<urn:uuid:a055f555-be3b-438c-a822-37bb1d0864ff>
CC-MAIN-2017-04
http://www.govtech.com/em/disaster/Airborne-System-Measure-Hurricane-Intensity.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00341-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945947
1,258
3.421875
3
SPF: What It Is and What It's Not

Discover the benefits, limitations, and functionality of SPF

What Is SPF?

SPF, or Sender Policy Framework, is an email authentication protocol you can use to authenticate your email. Email receivers who validate the authenticity of messages will query the DNS records associated with your sending domain to obtain a list of IP addresses you have explicitly authorized as valid sending systems. SPF is in widespread use, and the standard is managed by the IETF (RFC 7208).

The Protection of SPF

When email is sent from an IP that is not listed in your SPF record by someone who is not authorized to send on your domain's behalf, SPF allows the receiver to reject it. Your customer doesn't receive the email, and your reputation and brand stay intact.

Limitations of SPF

SPF alone is not a complete solution to email authentication. A few elements of the equation are missing even after an email sender has fully deployed SPF:
- There is no way for a recipient system to know how much reliance it should put on the SPF results for any given email.
- SPF provides no way for email receivers to provide any feedback to the email senders.
- SPF authenticates email domains that are buried deep in the message headers and not easily visible to a typical end user.

The Solution: DMARC, Cutting-Edge Email Authentication

The limitations of protocols like SPF led to the development of a complete email authentication solution: DMARC. The DMARC standard is an overlay that adds three key elements of feedback, policy, and identity alignment to the already deployed SPF and DKIM framework. With DMARC, you always know that the recipient received your original email, and it doesn't require behavioral adjustments from the user.
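The core SPF question a receiver asks — is the connecting IP covered by the record the domain published? — can be illustrated with a toy evaluator. The record and addresses below use RFC 5737 documentation space; a real validator per RFC 7208 must also perform DNS lookups and handle include:, a, mx, redirect and other mechanisms:

```python
# Toy evaluator for the core SPF check: is the sending IP covered by an
# ip4: mechanism in the domain's published record? Real SPF (RFC 7208)
# also resolves include:, a, mx, etc. -- this only handles ip4 and all.
import ipaddress

def check_spf(record: str, sender_ip: str) -> str:
    ip = ipaddress.ip_address(sender_ip)
    for mech in record.split()[1:]:          # skip the "v=spf1" version tag
        if mech.startswith("ip4:"):
            if ip in ipaddress.ip_network(mech[4:], strict=False):
                return "pass"
        elif mech in ("-all", "~all"):       # hard fail vs. soft fail
            return "fail" if mech == "-all" else "softfail"
    return "neutral"

record = "v=spf1 ip4:192.0.2.0/24 -all"      # example record only
print(check_spf(record, "192.0.2.55"))       # pass
print(check_spf(record, "198.51.100.7"))     # fail
```

The -all terminal is what lets a receiver reject mail from unlisted IPs, as described above; ~all merely marks it as suspicious.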
<urn:uuid:2eaf1c47-e960-4fce-8ba2-3f6c9fe49b99>
CC-MAIN-2017-04
https://www.agari.com/spf/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00065-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917993
381
3.1875
3
Definition: A set of data values and associated operations that are precisely specified independent of any particular implementation. Also known as ADT. Specialization (... is a kind of me.) dictionary, stack, queue, priority queue, set, bag. See also data structure. Note: Since the data values and operations are defined with mathematical precision, rather than as an implementation in a computer language, we may reason about effects of the operations, relations to other abstract data types, whether a program implements the data type, etc. One of the simplest abstract data types is the stack. The operations new(), push(v, S), top(S), and popOff(S) may be defined with axiomatic semantics as follows. From these axioms, one may define equality between stacks, define a pop function which returns the top value in a non-empty stack, etc. For instance, the predicate isEmpty(S) may be added and defined with the following additional axioms. After Nell Dale <firstname.lastname@example.org> May 2001. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 10 February 2005. HTML page formatted Mon Feb 2 13:10:39 2015. Cite this as: Paul E. Black, "abstract data type", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 10 February 2005. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/abstractDataType.html
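The stack operations named in the entry can be realized in code. Note that the ADT itself is defined by its axioms, not by any particular implementation; the Python class below is just one possible sketch, with method names mirroring the entry's operation names:

```python
class Stack:
    """One possible implementation of the stack ADT: new(), push, top, popOff, isEmpty."""

    def __init__(self):        # new(): construct an empty stack
        self._items = []

    def push(self, v):         # push(v, S): returns the stack with v on top
        self._items.append(v)
        return self

    def top(self):             # top(S): the most recently pushed value
        if not self._items:
            raise ValueError("top of an empty stack is undefined")
        return self._items[-1]

    def popOff(self):          # popOff(S): the stack without its top value
        if not self._items:
            raise ValueError("popOff of an empty stack is undefined")
        self._items.pop()
        return self

    def isEmpty(self):         # isEmpty(S): true only for new()
        return not self._items

# Axioms such as top(push(v, S)) = v and popOff(push(v, S)) = S hold:
s = Stack().push(1).push(2)
print(s.top())            # 2
print(s.popOff().top())   # 1
```

Because the operations are specified axiomatically, any implementation satisfying the axioms (a linked list, a fixed array, etc.) is equally valid as the stack ADT.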
<urn:uuid:5a149d03-4fb6-44ad-970c-781e7dbf97a8>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/abstractDataType.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00001-ip-10-171-10-70.ec2.internal.warc.gz
en
0.86018
346
2.8125
3
Local area network (LAN) is widely deployed in enterprises, universities, hospitals, the army, hotels and other places where a group of computers or other devices share the same communication link. More... FDDI – Fiber Distributed Data Interface WAN – Wide Area Network MAN – Metropolitan Area Network LAN (Local Area Network) is a computer network within a small area, such as a home, school or computer lab. More...
<urn:uuid:8452b336-c06c-4b4c-9cfc-d9c61c48201a>
CC-MAIN-2017-04
http://www.fs.com/blog/tag/local-area-network
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00211-ip-10-171-10-70.ec2.internal.warc.gz
en
0.901374
89
2.828125
3
MIT architects and engineers have taken the guesswork out of public transit for commuters in Florence, Italy, by creating a futuristic bus stop called EyeStop, which lets users plan bus trips on an interactive map, surf the Web, monitor their real-time exposure to pollutants and use their mobile devices as an interface with the bus shelter. Users can post ads and community announcements on its electronic bulletin board. EyeStop also powers itself through sunlight and collects real-time information about the surrounding environment. -- Massachusetts Institute of Technology

San Francisco Tweet

San Francisco Mayor Gavin Newsom announced in June that residents can make 311 customer service requests or complaints to the city's call center using Twitter. Residents can tweet to the city's call center about various services like graffiti removal, potholes, garbage maintenance and street cleaning. Mobile users also can send messages and pictures using a third-party application. To make service requests, residents who have Twitter accounts must follow San Francisco's account at SF311. Once a service request is submitted, it's logged by a call taker into a customer relationship management database. The resident is then given a tracking number so he or she can follow up on the issue. Users can send direct messages to the call center by appending the letter "D" before SF311, which allows them to receive real-time responses. However, users with general inquiries usually receive a link to the information they need.
<urn:uuid:29f0c079-b8fc-47ae-8c3c-eac46e62c807>
CC-MAIN-2017-04
http://www.govtech.com/e-government/San-Francisco-Residents-Can.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00121-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929103
286
2.5625
3
The VoIP Peering Puzzle - Part 6: ENUM Standards and Operation

In yesterday's tutorial, we took a high-level view of one of the key technical issues (address translation) that must be resolved in order for end-to-end VoIP services to become widespread and readily available. That translation is required because telephone numbers adhere to one addressing standard, known as E.164 (the International Public Telecommunication Numbering Plan, developed by the ITU-T; see www.itu.int), and Internet-connected workstations use Internet Protocol addresses, developed by the Internet Engineering Task Force (IETF) as part of their protocol specifications for IP version 4 (IPv4) and IP version 6 (IPv6). In addition to our examination of these addressing details, we also looked at two other constructs: the Uniform Resource Identifier, or URI, which is used to uniquely identify Internet-based resources, such as hosts, web pages, filenames, and so on; and the Domain Name System, or DNS, which acts as a white pages directory to look up IP addresses when given a host name. These four subsystems (telephone numbers, IP addresses, URIs, and the DNS) are part of a larger system, known as ENUM (for Electronic Numbering), which provides telephone-number-to-IP-address translation services. ENUM has been developed by the IETF's Telephone Number Mapping working group, which is part of the Real-Time Applications and Infrastructure Area (see http://www.ietf.org/html.charters/enum-charter.html). The working group's description summarizes their objectives: The ENUM working group has defined a DNS-based architecture and protocol [RFC 3761] by which an E.164 number, as defined in ITU Recommendation E.164, can be expressed as a Fully Qualified Domain Name in a specific Internet Infrastructure domain defined for this purpose (e164.arpa). In other words, ENUM extends the well-known DNS concepts and provides for telephone numbers to be stored in a DNS-compatible format.
Once in the DNS database, telephone numbers can be accessed and associated with IP-based services and their URIs. Let's see how this works. Recall from our previous tutorial that an E.164 number can have a maximum of 15 digits. Within North America, these numbers are governed by the North American Numbering Plan Administration (see www.nanpa.com), with a 10-digit format as follows:

NXX: Numbering plan area ("area code")
NXX: Central office code ("exchange")
XXXX: Subscriber number

where N is any number from 2-9, and X is any number from 0-9. Adding the country code of "1" (which identifies the NANP area) would result in a number such as 1 303 555 1212, typically expressed as +1 303 555 1212 to indicate a complete, internationally compatible telephone number format. The DNS database structure is logically constructed as a tree, with branches that extend from a center point called the root. The various domains, such as .com, .int, and so on, extend as branches from that root. One of these branches is .ARPA (from the Advanced Research Projects Agency, the U.S. government agency that funded much of the early Internet development), and under .ARPA is a branch designed for E.164-compatible numbers, designated .e164. Thus, the domain "e164.arpa" is designated to store E.164 numbers, with its usage defined in RFC 3761, entitled The E.164 to Uniform Resource Identifiers (URI) Dynamic Delegation Discovery System (DDDS) Application (ENUM) (see ftp://ftp.rfc-editor.org/in-notes/rfc3761.txt). RFC 3761 defines the algorithm that is used to store the telephone numbers within the DNS, as follows:

- Remove all characters with the exception of the digits. Thus, +1 303 555 1212 becomes 13035551212.
- Put dots between each digit: 1.3.0.3.5.5.5.1.2.1.2
- Reverse the order of the digits (since DNS reads addresses from right to left, not left to right): 2.1.2.1.5.5.5.3.0.3.1
- Append the string "e164.arpa" to the end of the sequence: 2.1.2.1.5.5.5.3.0.3.1.e164.arpa

Now that the telephone number has been converted into a DNS-compatible record type, the ENUM process can issue a query on that domain name. One of two results will occur. If a name server is found, a NAPTR (Naming Authority Pointer) record will be retrieved, which will identify the services associated with that number (voice, fax, etc.), and the telephone call can proceed via the Internet. If a name server is not found, an error message is returned, and the call can then be routed for connection via the PSTN. One of the subtleties of ENUM is that it allows a number of applications and services to be associated with a single telephone number. Our next tutorial will continue our examination of ENUM, and look at some of those applications that an ENUM-aware network can provide.

Copyright Acknowledgement: © 2006 DigiNet ® Corporation, All Rights Reserved

Mark A. Miller, P.E. is President of DigiNet ® Corporation, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies, and Internet Technologies Handbook, both published by John Wiley & Sons.
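The RFC 3761 conversion steps described above are mechanical enough to express directly in code. Here is a short Python sketch (the function name is ours; it reverses the digits before joining, which yields the same result as dotting first and then reversing):

```python
def e164_to_enum_domain(number: str) -> str:
    """Convert an E.164 number to its ENUM domain name per RFC 3761.

    1. Keep only the digits.
    2. Reverse their order (DNS reads names right to left).
    3. Put dots between the digits.
    4. Append "e164.arpa".
    """
    digits = [c for c in number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

print(e164_to_enum_domain("+1 303 555 1212"))
# 2.1.2.1.5.5.5.3.0.3.1.e164.arpa
```

The resulting domain name is what the ENUM process would then query in the DNS to look for a NAPTR record.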
<urn:uuid:cbf28364-4777-4916-8b37-a9620ccfb87e>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/unified_communications/The-VoIP-Peering-Puzzle151Part-6-ENUM-Standards-and-Operation-3648066.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00175-ip-10-171-10-70.ec2.internal.warc.gz
en
0.897501
1,219
3.34375
3
Why Do I Have to Tighten Security on My System? (Why Can't I Just Patch?) - Page 2

The Lifecycle of the Modern Security Vulnerability

I. Bug Discovery

If we think about the security vulnerabilities that crackers exploit, whether locally or remotely, we realize that they're caused by one thing: a "bug" in either the design or the implementation of a system program. The lifecycle of this vulnerability starts when someone discovers this bug, through whatever method. They may be reading code or reverse-engineering the program, but they might as well be reading Internet RFCs describing a given protocol. In any case, the problem becomes a real possibility at the moment someone discovers this bug. It becomes a little worse if and when this person shares that knowledge with another.

II. Vulnerability Discovery

Now at some point, possibly seconds later, someone realizes that this "bug" actually leaves a security hole in the program. If this program has privilege, the vulnerability may be exploitable to gain that privilege. Again, the discoverer doesn't necessarily share this knowledge with anyone!

III. Exploit Coding

At this stage, someone codes an exploit that can, for example:
- Run arbitrary commands.
- Dump a section of memory containing passwords or other privileged information to a file.
- Write well-crafted data to the end of a specific file.

At this point, the vulnerability has become our problem. The exploit writer now has the capability to break into our machine - and we usually don't even know about the vulnerability. This is not good.

IV. Exploit Sharing

The exploit coder may share his exploit at this point. He can distribute it privately, among friends and acquaintances. Our problem just got worse, as there are now more people that can break into our machine and we may still not even know about the vulnerability.

V. Public Release!

Finally, one of the exploit owners may choose to release the exploit publicly, on BugTraq or other security mailing lists and possibly on security web sites.
Our problem just got worse, in that now every script kiddie has access to a working exploit. Remember, there are tons of them and they're scanning the net indiscriminately, so we could be a target. But, our situation can finally be improved, in that someone might fix the vulnerability now! Remember, there's no guarantee that the vulnerability/exploit will ever reach this stage! Many exploits are circulated quite privately among cracker groups and thus don't become well-known for some time, if ever.

VI. Source Code Patch

Once the vulnerability is well-known, someone can code a patch. Often, the patch will be released on BugTraq and/or the vendor web site. This can happen very quickly in the Open Source community, but still often takes 1 hour to 4 days. Further, these source-code level patches are applied only by some sysadmins, who have the time and expertise to patch in this manner. Most admins wait for a vendor-supplied patch or update package. Finally, realize that even for this first group, there has already been a sizable window of opportunity, in which their system could have been cracked. To see this, consider all the time between step III and steps IV, V and VI! In all this time, some number of crackers has had a working method of cracking our machine, usually before we've even heard about it!

VII. Vendor Patch

Now some number of days, weeks or months later, the vendor will release a patch. At this point our troubles, with this particular vulnerability, are usually over! Remember, though, there has been a sizable window of opportunity between initial coding of the exploit and the vendor patch. In these days, weeks or months, your machine has been rather vulnerable. Given the indiscriminate nature of the script kiddie, there's a very real chance that you could get hit! Let's recapitulate the dangers here: first, many exploits are privately used, but not publicly announced for some time.
Second, there's a delay between availability of exploit code and a source code patch. Third, vendors take quite a while to release that patch/update, leaving a large window of vulnerability in which you can be attacked. Fourth, there are a boatload of script kiddies out there, which means that while the exploit is publicly available, there's a number of people firing it indiscriminately against many random machines on the Internet. The only real way that we can stop the script kiddie is to actually take some proactive action.

Really Stopping the Kiddie!

Now that you realize that you've got to do something proactive to stop the script kiddie, let's consider what you can do. First, if you're on a Linux system, run Bastille Linux (shameless plug!). Bastille can harden a system for you very effectively with a minimum of hassle - it'll also teach you a fair deal in the process! You can also harden a system by hand, though it's likely to be less comprehensive than a Bastille run, unless you're using a very well-written checklist. If you do this all by hand, keep in mind these minimum important steps:

- Firewall the box - if possible, do this both on the box and on your border router to the Internet.
- Patch, patch, patch and patch some more. Automate this process, if possible, to warn you of new patches as soon as they're released. Please remember that the window of vulnerability is large enough without a sysadmin waiting two-four weeks to apply patches...
- Perform a Set-UID root audit of the system, to clear up as many (local) paths to root as possible. I show how to do this and perform one for Red Hat 6.x in my previous SecurityPortal.com article.
- Deactivate all unnecessary network services/daemons, minimizing the possibility of remote exploits!
- Tighten the configurations of all remaining network services/daemons to better constrain remote exploits.
- Harden the core O/S itself, through PAM settings, boot security settings and so on...
- Educate the sysadmin and end users!

As I said, Bastille does this stuff very well. Here's a real-world example of how hardening a box can be so much more effective than only patching: Red Hat 6.0 shipped with a BIND named daemon that was vulnerable to a remote root exploit. This vulnerability was unknown at the time, so no patch existed for a little while. If you had run Bastille, it would have minimized the risk from any BIND exploit, known or unknown, by setting BIND to run as a non-root user in a "chroot" prison. When the exploit came out, people who hadn't hardened BIND were vulnerable to a remote root grab. Thousands of machines, at least, were rooted before patches were released and applied. If you had hardened ahead of time, by running Bastille or otherwise, the root grab failed. This example is just one of several - there were a few ways to root a Red Hat 6.0 box, all of which could be minimized by judicious hardening.

1. Actually, there are also hybrid exploits, where he gets an unprivileged shell on the system from some network daemon, without ever logging in, but these are a hybrid type. Our script kiddie generally doesn't use this stuff, though he may if he's bright or has a good text file instructing him.

Jay Beale is the Lead Developer of the Bastille Linux Project (http://www.bastille-linux.org). He is the author of several articles on Unix/Linux security, along with the upcoming book Securing Linux the Bastille Way, to be published by Addison Wesley. At his day job, Jay is a security admin working on Solaris and Linux boxes. You can learn more about his articles, talks and favorite security links via http://www.bastille-linux.org/jay.

SecurityPortal is the world's foremost on-line resource and services provider for companies and individuals concerned about protecting their information systems and networks. The Focal Point for Security on the Net (tm)
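The Set-UID root audit step in the hardening checklist above is traditionally done with something like `find / -perm -4000`; the equivalent logic can be sketched in Python (the function name is ours, and you would need sufficient read permissions on the directories you scan):

```python
import os
import stat

def find_setuid_files(root: str):
    """Walk a directory tree and list regular files with the set-UID bit set."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable or vanished entry; skip it
            if stat.S_ISREG(mode) and (mode & stat.S_ISUID):
                hits.append(path)
    return hits

# Example usage (a system-wide audit would start at "/"):
# for path in find_setuid_files("/usr/bin"):
#     print(path)
```

Each file this turns up is a potential local path to root, so anything on the list that isn't genuinely needed is a candidate for having its set-UID bit removed.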
<urn:uuid:b7da5aa8-ce5d-4889-b0b6-270c6afe7d78>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/netsecur/article.php/10952_624511_2/Why-Do-I-Have-to-Tighten-Security-on-My-System-Why-Cant-I-Just-Patch.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00083-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954187
1,699
2.96875
3
A new algorithm uses cutting-edge techniques to help computers identify human activity from video input far more quickly and efficiently than previous systems. Its inventors, MIT postdoc Hamed Pirsiavash and University of California at Irvine professor Deva Ramanan, will present the algorithm at the Conference on Computer Vision and Pattern Recognition in Columbus, Ohio, next month, according to a statement from MIT. The researchers drew on natural language processing techniques similar to those used in IBM's Watson and other emergent machine learning projects to create a "grammar" for each action they wanted the system to recognize. Pirsiavash and Ramanan's creation scales search times linearly, meaning that a video 10 times the length of another will take 10 times as long to search; some previous techniques would have taken 1,000 times as long. Additionally, the new algorithm can handle streaming video, because it can guess fairly accurately at the results of partial actions before they are completed. Pirsiavash said in the statement that the process is much like the one a system such as Watson would use to diagram a sentence. Complicated actions are broken down into their component parts and the algorithm simply looks for a pattern that fits the grammar. "When you make tea, for instance, it doesn't matter whether you first put the teabag in the cup or put the kettle on the stove. But it's essential that you put the kettle on the stove before pouring the water into the cup," he said. Pirsiavash told Network World that he doesn't know when his algorithm might show up in real-world applications, but said it's definitely going to do so at some point. "There are many companies working on commercializing computer vision systems," he said. "I am sure automatic action recognition will also be used in real products soon."
<urn:uuid:eee2b02a-4325-43af-b90f-6f8cf0f67aec>
CC-MAIN-2017-04
http://www.computerworld.com.au/article/545104/computer_knows_re_bowling_new_algorithm_identifies_human_activity_from_video/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00569-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950661
448
2.90625
3
Swift development began in secret in 2010 by Apple employee Chris Lattner, who worked hard on his evenings and weekends to create a new way of designing and building computer software. Throughout the initial Swift development, he kept it a closely guarded secret until he revealed it to Apple executives, who initially fortified Lattner's project with more experienced Apple engineers before completely shifting gears and making Swift a major Apple focus about a year and a half after Lattner's initial revelation. Swift incorporates ideas from a number of existing programming languages, particularly C#, Python and Ruby, and can be characterized as "Objective-C without the C," since Swift is largely Objective-C with different syntax. Released publicly in 2014, Swift is a general-purpose, multi-paradigm, compiled programming language that is designed to work with Apple's Cocoa and Cocoa Touch frameworks, in addition to the large body of extant Objective-C (ObjC) code written for Apple products. Unlike the lukewarm developer response to Google's Go language debut in 2009, Swift quickly "caught fire," gathering 2,400 Swift projects on GitHub even while it was only available to a small number of coders. With the soaring popularity of Apple devices, coupled with the fact that Swift was one of the biggest announcements of the 2014 WWDC, it was clear that Swift was about to become the next big thing, as programmers had a huge incentive to adopt the new language. All of this contributed to Wired's July 2014 headline (prior to Swift's public release) that "Apple's Swift Language Will Instantly Remake Computer Programming." Swift won first place for Most Loved Programming Language in the Stack Overflow Developer Survey 2015 and second place in 2016. In terms of platform support, Swift can be ported across a wide range of platforms and devices.
Fueled by his research and experience with the Low Level Virtual Machine (LLVM) compiler infrastructure, Lattner began his stealth development of the new programming language that would become Swift. Swift was designed to be "more resilient to erroneous code ("safer") than Objective-C, and more concise," Lattner said. Swift earned the support of other Apple engineers after Lattner informed management about his project because Apple saw a language that was not only compatible with existing Objective-C frameworks, but that also offered most of the novel features found in the prevailing programming languages introduced over the two preceding decades. By the time Swift was initially developed, languages and compilers had taken over the "dirty work" that developers previously had to do themselves, which made it an opportune time to develop, and introduce, a simpler language that was easier for programmers. For Apple, Swift provided an opportunity to give Apple developers a powerful and intuitive programming language which is "interactive and fun, [with] syntax is concise yet expressive... [while] safe by design, yet also produces software that runs lightning-fast."

Swift is announced at WWDC 2014

Major news broke in April 2016 that Swift may be adopted by Google as a "first class" language for Android. This announcement came a few months after executives from Google, Facebook and Uber met to discuss making Swift more central to their operations. Since its humble origins as Lattner's passionate side project, Swift has grown into a giant, with growing interest and adoption from IBM, Lyft, Firefox, LinkedIn, Coursera and other major corporations. With the amount of trust and sensitive personal data users put into the countless applications on our iPhones and other "iDevices," it's critical that all applications written in Swift are secure against threats and free from high-risk vulnerabilities.
Because Swift is essentially Objective-C with a different syntax, many of the same vulnerabilities that threaten Objective-C code also arise in applications written in Swift. Mobile application security is a serious issue, as our phones and tablets are important extensions of our lives and contain everything a hacker would need to steal our identity, savings, sensitive personal data and more. Two alarming findings in the 2015 Ponemon report highlight how wide the gap between developers and proper mobile application security is: one-third of the 640 responding organizations said they never test their apps for security issues before deployment, and the vast majority of the surveyed companies test less than half of the applications they deploy at all. With the amount of damage that can be done to your company's reputation and your users' data, it's critical that you scan your Swift code for any potential vulnerabilities before it goes to production, and the best, fastest and most effective way to do that is by using a static code analysis tool that can integrate at all stages of your software development lifecycle (SDLC) within the tools that your developers are already using. Checkmarx's CxSAST is a static code analysis solution that supports Swift out of the box. Since Swift is essentially Objective-C with slightly different syntax, the Checkmarx scanner interprets Swift to Objective-C in the backend before scanning the code. As a result, Checkmarx scans Swift code for over 60 quality and security issues, including twelve of the most severe and most common issues that cannot be left unfixed. Checkmarx develops solutions used by developers and security professionals to identify and fix vulnerabilities in web and mobile applications early in the development lifecycle.
We provide an easy and effective way for organizations to automate security testing within their Software Development Lifecycle (SDLC) which systematically eliminates software risk before applications are released. Amongst the company’s 1,000 customers are 5 of the world’s top 10 software vendors and many Fortune 500 and government organizations, including SAP, Samsung, Salesforce.com and the US Army. Are you developing in Swift? Learn more by reading 40 Tips You Must Know About Secure iOS App Development Interested in trying CxSAST on your own code? You can now use Checkmarx's solution to scan uncompiled / unbuilt source code in 18 coding and scripting languages and identify the vulnerable lines of code. CxSAST will even find the best-fix locations for you and suggest the best remediation techniques. Sign up for your FREE trial now. Checkmarx is now offering you the opportunity to see how CxSAST identifies application-layer vulnerabilities in real-time. Our in-house security experts will run the scan and demonstrate how the solution's queries can be tweaked as per your specific needs and requirements. Fill in your details and we'll schedule a FREE live demo with you.
<urn:uuid:eaf9e70f-60fd-416f-ae4f-6ca89f51a891>
CC-MAIN-2017-04
https://www.checkmarx.com/sast-supported-languages/swift-security-vulnerabilities-and-language-overview/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00569-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945843
1,390
2.515625
3
IBM announced plans to use its Watson cognitive computing technology to tap big data opportunities in Africa. IBM announced its next major chapter for Watson: fueling economic development and sparking new business opportunities across Africa. This announcement comes on the heels of IBM's $1 billion bet to create the Watson Group, which will accelerate efforts around the new era of cognitive computing. Now, IBM has launched a 10-year initiative to bring Watson and other cognitive systems to Africa. Dubbed "Project Lucy" after the earliest known human ancestor, the initiative will see IBM invest $100 million. Lucy is the common name of AL 288-1, several hundred pieces of bone representing about 40 percent of the skeleton of a female hominid estimated to have lived 3.2 million years ago. "In the last decade, Africa has been a tremendous growth story--yet the continent's challenges, stemming from population growth, water scarcity, disease, low agricultural yield and other factors, are impediments to inclusive economic growth," said Kamal Bhattacharya, director of IBM Research–Africa, in a statement. "With the ability to learn from emerging patterns and discover new correlations, Watson's cognitive capabilities hold enormous potential in Africa–helping it to achieve in the next two decades what today's developed markets have achieved over two centuries." Watson technologies will be deployed from IBM's Africa Research laboratory, providing researchers with resources to help develop commercially viable solutions in areas such as health care, education, water and sanitation, human mobility and agriculture. Moreover, to help fuel the cognitive computing market and build an ecosystem around Watson, IBM said it also will establish a new pan-African Center of Excellence for Data-Driven Development (CEDD) and is recruiting research partners such as universities, development agencies, start-ups and clients in Africa and around the world.
By joining the initiative, IBM's partners will be able to tap into cloud-delivered cognitive intelligence that will be invaluable for solving the continent's most pressing challenges and creating new business opportunities. "For Africa to join, and eventually leapfrog, other economies, we need comprehensive investments in science and technology that are well integrated with economic planning and aligned to the African landscape," said Professor Rahamon Bello, vice chancellor of the University of Lagos. "I see a great opportunity for innovative research partnerships between companies like IBM and African organizations, bringing together the world's most advanced technologies with local expertise and knowledge." IBM has increased its investment across Africa in recent years, culminating in its first African IBM Research lab in Nairobi, Kenya. Africa is witnessing the emergence of African Lions--countries that are spearheading high levels of economic growth through innovation and which are set to boost industrial growth by an estimated $400 billion by 2020. New figures show that increased connectivity and technology use will radically transform sectors as diverse as agriculture, retail and health care--and contribute as much as $300 billion a year to Africa's Gross Domestic Product (GDP) by 2025, according to a McKinsey report. Over the last five years, IBM has continued to make strategic investments in the African technology space as it seeks to provide governments, businesses and academia with enhanced access to the high-end technologies that will power their economies' growth. Moreover, Big Blue said big data technologies have a major role to play in Africa's development challenges: from understanding food price patterns, to estimating GDP and poverty numbers, to anticipating disease--the key is turning data into knowledge and actionable insight.
“The next wave of development in Africa requires a new collaborative approach where nonprofit and commercial organizations like RTI and IBM work together to consolidate, analyze and act upon the continent’s data,” said Aaron Williams, executive vice president of International Development at RTI International, in a statement. “Data-driven development has the potential to improve the human condition and provide decision makers with the insight they need to make more targeted interventions.”
Cyber security has become a momentous issue in modern times. Now that we’re in a digital age, our country’s retail infrastructure is quickly transitioning from credit cards to one-click shopping. The opportunities and rewards for each of us are clear: we save time and money and gain more choices for better products. But in doing so, we expose confidential details such as bank details, personal email and home addresses to the web.

As the cyber threat environment evolves, threat protection must evolve as well. With the emergence of targeted attacks and advanced persistent threats, it is clear that a new approach to cyber security is required. Traditional techniques are simply no longer adequate to secure data against cyber attacks. Here is how these attacks operate:

- Social – targeting and attacking specific people with social engineering and advanced malware
- Stealthy – executed in a series of low-profile moves that are undetectable to standard security tools or buried among thousands of other event logs collected every day
- Sophisticated ways in – exploiting vulnerabilities, using backdoor controls, and stealing and using valid credentials

Cyber crime is becoming the new normal, so what makes you think you will be spared by cyber criminals? To help protect yourself and those around you, you ought to be aware of online risks and the simple steps you can take against cyber threats. Read below for tips on how to safeguard yourself from these common cyber attacks.

Steps to take in order to stay secure from cyber crime:

- Connect securely wherever you are: only connect to the internet over a secure connection and browser.
- Don’t just click anything you see on the web: don’t click on links or pop-ups, open attachments, or respond to emails from strangers.
- Respond only to trusted messages: don’t reply to random messages from strangers; think before you reply.
- Change your password regularly.
- Keep your private information safe: take extra precaution when giving information to unsolicited callers; there is a constant threat to your personal data whether you are on the go (cell phone, wallet, laptop) or at home (PC, home phone).
- While on social media, think before you post. Limit the amount of personal information you post publicly, especially information that would make you vulnerable, such as your address or details about your schedule or routine. If a friend posts information about you, make sure it is something you are comfortable sharing with strangers.
- Be conscious of and use your privacy settings well: take advantage of privacy and security settings, and use site settings to limit the information you share with the general public online.
- Finally, sign up for real-time alerts: go to your bank account or credit card home page and set a purchase limit on your debit/credit card. Most banks and credit card companies have real-time notification services that allow them to contact you in the event of a purchase attempt deemed “unusual.”

Found this post helpful? Don’t forget to like and share our page on social media. You can visit our homepage here to access other posts.
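The password advice above can be made concrete in code. The sketch below is illustrative only: the tiny wordlist stands in for the multi-million-entry dictionaries that real attackers pre-load, and the exact rules (length, character classes) are assumptions, not a standard.

```python
import re

# Tiny stand-in for the huge wordlists attackers pre-load for dictionary attacks.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "admin"}

def is_acceptable(password: str) -> bool:
    """Reject passwords that are dictionary entries, short, or low-complexity."""
    if password.lower() in COMMON_PASSWORDS:
        return False
    if len(password) < 12:
        return False
    # Require at least three character classes: lower, upper, digit, symbol.
    classes = sum(bool(re.search(pattern, password))
                  for pattern in (r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"))
    return classes >= 3

print(is_acceptable("letmein"))           # → False: instant dictionary hit
print(is_acceptable("Tr0ub4dor&3-horse")) # → True: long, mixed character classes
```

A real deployment would check candidates against a full breached-password list and combine this with the regular password changes recommended above.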
As with the previous courses we've done, this program is taught by researchers from our Helsinki Security Lab. The program teaches students about what malicious code is, how it can be analyzed, and how to reverse engineer executable code for different platforms, such as Windows and Android. Students will explore a variety of topics, including binary obfuscation and exploits. The course will also include non-technical topics such as ethics and legal issues related to information security. As is usual for our courses, students get a very hands-on approach to learning, which includes solving reverse engineering puzzles like the one created by our own researchers below: On the other side of the world in Kuala Lumpur, Malaysia – where our other Security Lab is located – we are also collaborating with lecturers from Monash University's School of Information Technology (Sunway Campus) to launch a similar course. For the first time, students will be offered a Malware Analysis course, with a syllabus that places a greater focus on analyzing malware targeting the Android platform. This course will include brand new lecture and lab materials to help students gain a broader perspective of this field and develop the specialized skills needed for analyzing malware. Subjects covered in the lectures and lab sessions include understanding the Android security framework, its operating and file systems and static and dynamic analysis of malware.
PNRP name resolution uses these two steps:

- Endpoint determination – In this step the peer determines the IPv6 address of the network interface on which the PNRP ID service is published.
- PNRP ID resolution – After locating and testing the reachability of the peer whose PNRP ID matches the desired service, the requesting computer sends a PNRP Request message to that peer for the PNRP ID of the desired service. The other peer sends a reply confirming the PNRP ID of the requested service, together with a comment and up to 4 kilobytes of additional information. The comment and the additional 4 kilobytes can carry custom information back to the requester about the status of the server or computer services.

To discover the needed neighbor, PNRP performs an iterative process in which it locates nodes that have published their PNRP IDs. At each step, the node performing the resolution communicates with the node that is closer to the target PNRP ID.
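The iterative resolution described above can be sketched in a few lines. The graph below is a toy stand-in for each peer's cache of known PNRP IDs; the integer IDs and the simple numeric distance metric are simplifying assumptions for illustration, not the actual PNRP number space or wire format.

```python
def resolve(target_id, start_node, caches):
    """Iteratively hop toward the node whose published ID is numerically
    closest to target_id; each hop queries the current node's cache."""
    current = start_node
    while True:
        # Ask the current node for the closest peer it knows about.
        known = [current] + caches.get(current, [])
        best = min(known, key=lambda node_id: abs(node_id - target_id))
        if best == current:   # no closer peer known: resolution terminates here
            return current
        current = best        # iterate, querying the closer peer next

# Toy overlay of four peers and their caches of neighbors.
caches = {10: [50], 50: [10, 90], 90: [50, 100], 100: [90]}
print(resolve(98, 10, caches))  # → 100, reached via 10 -> 50 -> 90 -> 100
```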
The Linux kernel is a monolithic Unix-like computer operating system kernel. The Linux operating system is based on it and deployed on both traditional computer systems such as personal computers and servers, usually in the form of Linux distributions, and on various embedded devices such as routers, wireless access points, PBXes, set-top boxes, FTA receivers, smart TVs, PVRs and NAS appliances. The Android operating system for tablet computers, smartphones and smartwatches is also built atop the Linux kernel.

Linux allows the kernel to be configured at run time, to enable or disable different features as you see fit. This way you don’t have to compile a monolithic kernel, and can save some memory usage. Some modules you’ll only need for a short time, others you’ll need all the time. You can configure your Linux machine to load kernel modules on startup so you don’t have to remember to do that when (if) you reboot.

There are a few commands that allow you to manipulate kernel modules. Each is quickly described below; for more information say `man [command]`.

- depmod – handle dependency descriptions for loadable kernel modules.
- insmod – install loadable kernel module.
- lsmod – list loaded modules.
- modinfo – display information about a kernel module.
- modprobe – high level handling of loadable modules.
- rmmod – unload loadable modules.

The usage of the commands is demonstrated below; it is left as an exercise to the reader to fully understand the commands.

Using Module Commands

Below the different kernel module commands are demonstrated.

# Show the module dependencies.
depmod -n

# Install some module
insmod --autoclean [modname]

# This lists all currently loaded modules; lsmod takes no useful parameters
lsmod

# Display information about module eepro100
modinfo --author --description --parameters eepro100

# Removing a module (don't use the example)
rmmod --all --stacks ip_tables

Module Configuration Files

The kernel modules can use two different methods of automatic loading. The first method (modules.conf) is my preferred method, but you can do as you please.

- modules.conf – This method loads the modules before the rest of the services, I think before your computer chooses which runlevel to use
- rc.local – Using this method loads the modules after all other services are started

Using ‘modules.conf’ will require you to say `man 5 modules.conf`. Using ‘rc.local’ requires you to place the necessary commands (see above) in the right order.

# modules.conf - configuration file for loading kernel modules

# Create a module alias parport_lowlevel to parport_pc
alias parport_lowlevel parport_pc

# Alias eth0 to my eepro100 (Intel Pro 100)
alias eth0 eepro100

# Execute /sbin/modprobe ip_conntrack_ftp after loading ip_tables
post-install ip_tables /sbin/modprobe ip_conntrack_ftp

# Execute /sbin/modprobe ip_nat_ftp after loading ip_tables
post-install ip_tables /sbin/modprobe ip_nat_ftp

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
/sbin/insmod ip_tables
/sbin/modprobe ip_conntrack_ftp
/sbin/modprobe ip_nat_ftp

You should now see why modules are necessary. They can be loaded via ‘modules.conf’ or ‘rc.local’; ‘modules.conf’ loads them first and ‘rc.local’ loads them last. Using the various module commands you can add, remove, list or get information about modules.
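As a side note, `lsmod` is essentially a formatted view of `/proc/modules`. The minimal Python sketch below (illustrative only, not a replacement for the real tool) parses that same file:

```python
def list_modules(path="/proc/modules"):
    """Return (name, size, use_count) for each loaded kernel module.
    Returns an empty list on systems without /proc/modules."""
    modules = []
    try:
        with open(path) as f:
            for line in f:
                fields = line.split()
                # /proc/modules format: name size use_count users state address
                modules.append((fields[0], int(fields[1]), int(fields[2])))
    except FileNotFoundError:
        pass
    return modules

# Print a simple lsmod-style listing.
for name, size, used in list_modules():
    print(f"{name:<24}{size:>10}  {used}")
```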
If you’re not careful and you don’t use anti-malware software, you might end up with various viruses, Trojans and worms on your computer. But, according to Bitdefender researchers, you might even get saddled with a hybrid or two of these different types of malware. The researchers have dubbed these hybrids “frankenmalware”, and out of some 10 million detected and analyzed malicious files, they identified over 40,000 of these “malware sandwiches”.

How does it happen, you ask? “A virus infects executable files; and a worm is an executable file,” explains Loredana Botezatu. “If the virus reaches a PC already compromised by a worm, the virus will infect the exe files on that PC – including the worm. When the worm spreads, it will carry the virus with it. Although this happens unintentionally, the combined features from both pieces of malware will inflict a lot more damage than the creators of either piece of malware intended.”

To explain how the symbiosis works, she shares the example of the Virtob virus / Rimecud worm “collaboration”. The Rimecud worm spreads via file-sharing apps, USB devices, Microsoft MSN Messenger and locally mapped network drives. Besides that, it also steals passwords by injecting itself into the explorer.exe process, opens a backdoor that will allow it to download additional malware from a C&C server and – if the computer has remote control software installed – allows cyber criminals to access and control it. As it turns out, Bitdefender has recently begun spotting the Virtob virus attached to the aforementioned worm. The virus – which also opens a backdoor, contacts IRC C&C servers and modifies a host of files – infects executable files and, as the worm itself is an executable, it is also likely to be infected.
Now, apart from the unfortunate fact that a computer hosting this piece of “frankenmalware” is now contacting two C&C servers, has two backdoors open, two attack techniques active and various spreading methods at its disposal, there is also the problem of whether AV solutions will be able to detect and remove both – or either one. “Imagine that a worm is infected by a file infector (virus),” posits Botezatu. “And an AV detects the file infector first and tries to disinfect the files, which include the worm. In some rare cases disinfecting compromised files leaves behind clean files that are at the same time altered (not identical to the original anymore). They maintain their functionality but are slightly different in form. As most files are detected according to signatures and not based on their behavior (heuristically), an altered worm (disinfected along with other files that have been compromised by a file infector and disinfected by an antivirus) may not be caught anymore by the signature applied to the original file (that had been modified after disinfection). Disinfection might this way lead to a mutation that can actually help the worm.”
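The disinfection-mutation problem Botezatu describes is easy to see in miniature. Signature-based detection typically matches a hash or byte pattern of the known sample, so if disinfection leaves the worm's bytes even slightly altered, the original signature no longer fires. The byte strings below are made-up placeholders, not real malware:

```python
import hashlib

def signature(data: bytes) -> str:
    """Stand-in for an AV signature: here, simply the SHA-256 of the file."""
    return hashlib.sha256(data).hexdigest()

original_worm = b"MZ fake worm body"                    # pristine worm sample
infected      = original_worm + b" [virus appended]"    # file infector attaches itself
disinfected   = infected.replace(b" [virus appended]", b"\x00" * 4)  # imperfect cleanup

# The cleaned file may still run as a worm, but its signature has mutated:
print(signature(original_worm) == signature(disinfected))  # → False
```

This is why behavior-based (heuristic) detection matters: a byte-exact signature for the original worm misses the mutated-but-functional copy left behind.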
Could this be the breakthrough in data privacy the world has awaited for so long? A researcher with IBM (Armonk, N.Y.) claims to have developed a method for handling encrypted data without actually revealing the content. The technique is called privacy homomorphism, or fully homomorphic encryption, and makes possible the deep and unlimited analysis of encrypted information without sacrificing confidentiality, says IBM. The solution was formulated by IBM researcher Craig Gentry and uses a mathematical object called an ideal lattice, to allow people to fully interact with encrypted data in ways previously thought impossible. The implications of the technique mean that computer vendors storing the confidential, electronic data of others will be able to fully analyze data on their client's behalf without expensive interaction with the client, and without seeing any of the private data. With Gentry's technique, states a release, the analysis of encrypted information can yield the same detailed analysis as if the original data was fully visible to all. According to IBM, the solution could help strengthen the business model of cloud computing, where a computer vendor is entrusted to host the confidential data of others in a ubiquitous Internet presence. It could also potentially enable other applications, such as filters to identify spam, even in encrypted e-mail, or protecting information contained in electronic medical records. The breakthrough might also one day enable computer users to retrieve information from a search engine without the search engine knowing precisely what was requested. "Fully homomorphic encryption is a bit like enabling a layperson to perform flawless neurosurgery while blindfolded, and without later remembering the episode," said Charles Lickel, VP of software research at IBM, in a statement. "We believe this breakthrough will enable businesses to make more informed decisions, based on more studied analysis, without compromising privacy. 
We also think that the lattice approach holds potential for helping to solve additional cryptography challenges in the future."
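Partially homomorphic schemes existed long before Gentry's fully homomorphic construction, and they give a feel for the idea: unpadded ("textbook") RSA, for example, is multiplicatively homomorphic, because multiplying two ciphertexts yields a valid encryption of the product of the plaintexts. The key below uses deliberately tiny, insecure primes purely for illustration:

```python
# Toy RSA key (insecure, illustration only): p = 61, q = 53.
n, e, d = 3233, 17, 413   # n = 61*53; e*d = 7021 ≡ 1 (mod lcm(60, 52) = 780)

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

a, b = 7, 6
c_product = (encrypt(a) * encrypt(b)) % n  # computed on ciphertexts only
print(decrypt(c_product))                  # → 42, recovered without ever
                                           #   decrypting a or b individually
```

Gentry's breakthrough extends this property so that arbitrary computations (both additions and multiplications, chained without limit) can be evaluated on ciphertexts.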
A fiber optic patch cord, also known as a fiber jumper or fiber patch cable, is terminated with fiber optic connectors on both ends and is used to make wired connections between devices. A cable terminated with a connector on only one end is called a fiber pigtail. Fiber patch cables, through whose connectors fiber links attach to optical modules, come in a variety of types that cannot interoperate with each other. Fiber optic patch cables are classified by connector type; below is a description of several common optical connectors used in networks.

1. FC fiber optic patch cord: uses a metal sleeve to strengthen the exterior and a screw-type connection. Commonly used on the ODF (Optical Distribution Frame) side.
2. SC fiber patch cord: connects to GBIC optical modules; has a rectangular shell with a locking tab on the cable termination. It is a push-pull connector that requires no rotation. Mostly used on routers and switches.
3. ST fiber patch cord: commonly used in fiber optic patch panels; has a round shell with straight-tip terminations. 10Base-F connections usually use ST connectors, which are common in fiber optic distribution frames.
4. LC fiber patch cable: connects to SFP modules; uses a push-and-latch structure adopting the modular jack (RJ) latch mechanism, which is easy to operate. Commonly used on routers.
5. MT-RJ fiber jumper: features a two-fiber connection, that is, two fibers within one MT-RJ connector; it is also distinctive in having a plastic housing and a plastic ferrule.

ST and SC connectors are the ones most commonly used in general networks.
The ST connector is inserted and then given a half turn with its bayonet mount; the disadvantage is that it is easily broken. The SC connector plugs in directly and is very easy to use; the disadvantage is that it can easily be pulled out. The FC connector is generally used in telecommunications networks and has a nut that screws onto the adapter; the advantage is that it is solid and dust-resistant, while the disadvantage is that installation takes a little longer.

An MT-RJ fiber optic patch cable consists of two high-precision plastic molded connectors and cable. The external parts of the connectors are precision plastic parts, including a push-pull latching mechanism, suited to indoor applications in telecommunications and data network systems.

There are many types of fiber connectors; in addition to the five described above, there are others such as MU. On the label of a fiber optic pigtail connector we often see markings such as “FC/PC” or “SC/PC”. What do they mean?

1. The part before the “/” indicates the connector type of the fiber pigtail. The “SC” connector is a standard square connector made of engineering plastic, resistant to high temperature and not easily oxidized. The optical interfaces on transmission equipment are generally SC connectors. The “LC” connector is similar in shape to the SC connector but smaller. The “FC” connector is a metal connector, usually used on the ODF side; metal connectors withstand more mating cycles than plastic ones.

2. The part after the “/” indicates how the connector end-face is processed, that is, the polishing (grinding) mode. “PC” is the most widely used on telecom operators’ equipment; its end-face is polished flat. “UPC” has lower attenuation than “PC” and is generally used for devices with special requirements; some foreign manufacturers use FC/UPC for the internal fiber jumpers of ODFs, mainly to improve the performance specifications of the ODF device itself.
In addition, “APC” is more often used in broadcasting and early CATV. Its pigtail end-face is polished at an angle, which improves the quality of television signals. The main reason is that TV signals are carried by analog optical modulation: when the coupling end-face is perpendicular, reflected light returns along the original path, and because the fiber's refractive index is unevenly distributed, part of it arrives back at the coupling surface. Although this energy is very small, analog transmission cannot completely eliminate the noise, so it amounts to a weak, delayed copy superimposed on the original clear signal, which appears on screen as ghosting. An angled pigtail end-face prevents the reflected light from returning along the original path. Digital signals generally do not have this problem.
News Article | August 26, 2016 The genome-editing system known as CRISPR allows scientists to delete or replace any target gene in a living cell. MIT researchers have now added an extra layer of control over when and where this gene editing occurs, by making the system responsive to light. With the new system, gene editing takes place only when researchers shine ultraviolet light on the target cells. This kind of control could help scientists study in greater detail the timing of cellular and genetic events that influence embryonic development or disease progression. Eventually, it could also offer a more targeted way to turn off cancer-causing genes in tumor cells. “The advantage of adding switches of any kind is to give precise control over activation in space or time,” said Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Electrical Engineering and Computer Science at MIT and a member of MIT’s Koch Institute for Integrative Cancer Research and its Institute for Medical Engineering and Science. Bhatia is the senior author of a paper describing the new technique in the journal Angewandte Chemie. The paper’s lead author is Piyush Jain, a postdoc in MIT’s Institute for Medical Engineering and Science. Before coming to MIT, Jain developed a way to use light to control a process called RNA interference, in which small strands of RNA are delivered to cells to temporarily block specific genes. “While he was here, CRISPR burst onto the scene and he got very excited about the prospect of using light to activate CRISPR in the same way,” Bhatia said. CRISPR relies on a gene-editing complex composed of a DNA-cutting enzyme called Cas9 and a short RNA strand that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut. 
When Cas9 and the guide RNA are delivered into cells, a specific cut is made in the genome; the cells’ DNA repair processes glue the cut back together but permanently delete a small portion of the gene, making it inoperable. In previous efforts to create light-sensitive CRISPR systems, researchers have altered the Cas9 enzyme so that it only begins cutting when exposed to certain wavelengths of light. The MIT team decided to take a different approach and make the binding of the RNA guide strand light-sensitive. For possible future applications in humans, it could be easier to deliver these modified RNA guide strands than to program the target cells to produce light-sensitive Cas9, Bhatia said. “You really don’t have to do anything different with the cargo you were planning to deliver except to add the light-activated protector,” she said. “It’s an attempt to make the system much more modular.” To make the RNA guide strands light-sensitive, the MIT team created “protectors” consisting of DNA sequences with light-cleavable bonds along their backbones. These DNA strands can be tailored to bind to different RNA guide sequences, forming a complex that prevents the guide strand from attaching to its target in the genome. When the researchers expose the target cells to light with a wavelength of 365 nanometers (in the ultraviolet range), the protector DNA breaks into several smaller segments and falls off the RNA, allowing the RNA to bind to its target gene and recruit Cas9 to cut it. In this study, the researchers demonstrated that they could use light to control editing of the gene for green fluorescent protein (GFP) and two genes for proteins normally found on cell surfaces and overexpressed in some cancers. “If this is really a generalizable scheme, then you should be able to design protector sequences against different target sequences,” Bhatia said. “We designed protectors against different genes and showed that they all could be light-activated in this way. 
And in a multiplexed experiment, when a mixed population of protectors was used, the only targets that were cleaved after light exposure were those being photo-protected.” This precise control over the timing of gene editing could help researchers study the timing of cellular events involved in disease progression, in hopes of determining the best time to intervene by turning off a gene. “CRISPR-Cas9 is a powerful technology that scientists can use to study how genes affect cell behavior,” said James Dahlman, an assistant professor of biomedical engineering at Georgia Tech, who was not involved in the research. “This important advance will enable precise control over those genetic changes. As a result, this work gives the scientific community a very useful tool to advance many gene editing studies.” Bhatia’s lab is also pursuing medical applications for this technique. One possibility is using it to turn off cancerous genes involved in skin cancer, which is a good target for this approach because the skin can be easily exposed to ultraviolet light. The team is also working on a “universal protector” that could be used with any RNA guide strand, eliminating the need to design a new one for each RNA sequence, and allowing it to inhibit CRISPR-Cas9 cleavage of many targets at once. The research was funded by the Ludwig Center for Molecular Oncology, the Marie-D. and Pierre Casimir-Lambert Fund, a Koch Institute Support Grant from the National Cancer Institute, and the Marble Center for Cancer Nanomedicine. Researchers have identified a gene that increases the risk of schizophrenia, and they say they have a plausible theory as to how this gene may cause the devastating mental illness. After conducting studies in both humans and mice, the researchers said this new schizophrenia risk gene, called C4, appears to be involved in eliminating the connections between neurons — a process called "synaptic pruning," which, in humans, happens naturally in the teen years. 
It's possible that excessive or inappropriate "pruning" of neural connections could lead to the development of schizophrenia, the researchers speculated. This would explain why schizophrenia symptoms often first appear during the teen years, the researchers said. Further research is needed to validate the findings, but if the theory holds true, the study would mark one of the first times that researchers have found a biological explanation for the link between certain genes and schizophrenia. It's possible that one day, a new treatment for schizophrenia could be developed based on these findings that would target an underlying cause of the disease, instead of just the symptoms, as current treatments do, the researchers said. "We're far from having a treatment based on this, but it's exciting to think that one day, we might be able to turn down the pruning process in some individuals and decrease their risk" of developing the condition, Beth Stevens, a neuroscientist who worked on the new study, and an assistant professor of neurology at Boston Children's Hospital, said in a statement. The study, which also involved researchers at the Broad Institute's Stanley Center for Psychiatric Research at Harvard Medical School, is published today (Jan. 27) in the journal Nature. [Top 10 Mysteries of the Mind] From previous studies, the researchers knew that one of the strongest genetic predictors of people's risk of schizophrenia was found within a region of DNA located on chromosome 6. In the new study, the researchers focused on one of the genes in this region, called complement component 4, or C4, which is known to be involved in the immune system. Using postmortem human brain samples, the researchers found that variations in the number of copies of the C4 gene that people had, and the length of their gene, could predict how active the gene was in the brain. 
The researchers then turned to a genome database, and pulled information about the C4 gene in 28,800 people with schizophrenia, and 36,000 people without the disease, from 22 countries. From the genome data, they estimated people's C4 gene activity. They found that the higher the levels of C4 activity were, the greater a person's risk of developing schizophrenia was. The researchers also did experiments in mice, and found that the more C4 activity there was, the more synapses were pruned during brain development. Previous studies found that people with schizophrenia have fewer synapses in certain brain areas than people without the condition. But the new findings "are the first clear evidence for a molecular and cellular mechanism of synaptic loss in schizophrenia," said Jonathan Sebat, chief of the Beyster Center for Molecular Genomics of Neuropsychiatric Diseases at the University of California, San Diego, who was not involved in the study. Still, Sebat said that the studies in mice are preliminary. These experiments looked for signs of synaptic pruning in the mice but weren't able to directly observe the process occurring. More detailed studies of brain maturation are now needed to validate the findings, Sebat said. In addition, it remains to be seen whether synaptic pruning could be a target for antipsychotic drugs, but "it's promising," Sebat said. There are drugs in development to activate the part of the immune system in which C4 is involved, Sebat noted. Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. Sometimes chemists set themselves up for a surprise. Following sets of experiments in which something doesn’t happen and doesn’t seem likely to happen, they soon believe it never will. Until it does. 
Chemists have traditionally thought of cyclopentadienyl ligands as being “innocent,” which means they offer electronic support to a metal catalyst but generally don’t do anything chemically. The two groups were studying reactions involving Cp*Rh(bipyridine), often used in hydrogenation reactions and in hydrogen-forming reactions, when they found that the expected metal hydride intermediate was followed by formation of an unexpected intermediate in which the hydrogen had migrated to one of the carbon atoms in the Cp* ring. “These two reports showing that the seemingly innocent Cp* ligand can reversibly form a C–H bond by proton transfer from rhodium hydride are remarkable,” comments chemistry professor David Milstein of the Weizmann Institute of Science, who was not involved in the research. “Considering the ubiquity of cyclopentadienyl metal complexes in homogeneous catalysis, this pathway should be seriously considered in the design and understanding of reactions in which proton/hydride transfer may be involved.” Alexander J. M. Miller of the University of North Carolina, Chapel Hill, who led one of the teams, says chemists had previously worked out mechanisms involving hydride intermediates that made sense and thought the story ended there. But they did not exercise due diligence and poke around enough to see that a protonated Cp* intermediate, denoted Cp*H, could be involved as well. “What’s more surprising,” Miller points out, “the Cp*H complex is not a dead end. This diene complex is still an active catalyst.” Miller’s group came across the Cp*H intermediate while investigating hydride transfer reactions with the cellular enzyme cofactor nicotinamide adenine dinucleotide (NAD+) to form the reduced product NADH (Chem. Commun. 2016, DOI: 10.1039/c6cc00575f). Meanwhile, a team led by Harry B. Gray and Jay R. Winkler at Caltech and James D. 
Blakemore at the University of Kansas discovered the Cp*H intermediate while investigating the coupling of protons to form H2 when treating Cp*Rh(bipyridine) with acid (Proc. Natl. Acad. Sci. USA 2016, DOI: 10.1073/pnas.1606018113). “These discoveries illustrate the versatility of mechanisms by which protons and hydrides can be delivered to and from metals,” comments Morris Bullock, director of the Center for Molecular Electrocatalysis at Pacific Northwest National Laboratory. “While these examples are for rhodium, the prevalence of cyclopentadienyl ligands in organometallic catalysts raises the possibility that similar reactivity could be widespread and involve other metals, and may be intentionally exploited in the design of new catalysts.”

Cardiac glycosides, which are bioactive natural products found in certain plants and insects, aid in cardiac treatment because they cause the heart to contract and increase cardiac output. They are used in prescription medications such as digitoxin and strophanthin. Now researchers at Yale have also discovered that cardiac glycosides block the repair of DNA in tumor cells. Because tumor cells are rapidly dividing, their DNA is more susceptible to damage, and inhibition of DNA repair is a promising strategy to selectively kill these cells. Several other researchers have noted that cardiac glycosides possess anticancer properties, but the basis for these effects was not well known. The Yale scientists showed that cardiac glycosides inhibit two key pathways that are involved in the repair of DNA. “We performed a high-content drug screen with the Yale Center for Molecular Discovery, which identified some interesting cardiac drugs that affect DNA repair,” said Ranjit Bindra, assistant professor of therapeutic radiology and of pathology at the Yale School of Medicine. “This has many therapeutic implications for new cancer drugs.”
Bindra and Yale professor of chemistry Seth Herzon are the principal investigators of the study, which appears in the Journal of the American Chemical Society. Herzon and Bindra also are members of the Yale Cancer Center. "Our approach focused on damaging the cancer cells' DNA using radiation, and then measuring the rate of repair in the presence of different compounds. All in all, we evaluated 2,400 compounds," Herzon said. "Surprisingly, we think that the cardiac glycosides inhibit the retention of a key DNA repair protein known as 53BP1 at the site of DNA double-strand breaks. This is a very interesting activity that was unexpected." Herzon and Bindra said the same approach can be applied to screen hundreds of thousands of compounds. "We are partnering with industry to gain access to their large compound collections. Not only will this help us find new anticancer agents, it can help us elucidate more of the fundamental biology underlying DNA repair," Herzon said. The next step in their research will be to improve the cancer-fighting properties of cardiac glycosides, while modulating their other biological effects. Explore further: Rare byproduct of marine bacteria kills cancer cells by snipping their DNA Scientists designed, created, and tested a chromium (Cr) complex, finding that a novel phosphorus-containing ring structure helps chromium turn dinitrogen and acid into ammonia. This work is part of efforts to develop molecular complexes to control electrons and protons for use in turning renewable energy into storable fuels. Credit: Jonathan Darmon, PNNL Underappreciated compared to its heavier metal counterparts, chromium failed for more than 30 years to turn nitrogen gas into ammonia, a reaction that involves breaking one tough bond and making six new ones. But scientists at the Center for Molecular Electrocatalysis thought chromium was up to the job; it just needed a little support. 
At the center, one of DOE's Energy Frontier Research Centers (EFRCs), the scientists created a 12-atom ring structure called a ligand that partially surrounds the metal and offers a stable environment for the metal to drive the reaction. By creating this ligand structure, the team demonstrated the importance of the environment supporting chromium. Often a key to controlling metal reactivity, the structure encircling the chromium causes the normally unreactive dinitrogen to become more reactive when it binds to the metal. "This research required the synergy of experimental and computational efforts in an EFRC," said the study's lead Dr. Michael Mock at DOE's Pacific Northwest National Laboratory. "Studying this challenging reaction has benefited from the multiple years of funding that an EFRC enables." The EFRCs are funded by the Office of Basic Energy Sciences at DOE's Office of Science. Producing ammonia for fertilizer consumes vast quantities of energy, an issue that this work may one day help solve. However, this study is focused on another important challenge: storing intermittent wind and solar energy. Solar panels and wind turbines produce electrons that flow along power lines to energize appliances around our homes. But, solar power levels drop when the clouds roll in. What if those electrons could be stored inside a chemical bond, as an energy-dense storage option? This study, which is complemented by two previous reports focused on understanding dinitrogen reactivity with chromium, may someday lead to the development of a system with this common metal as a hard-working catalyst. "This research shows how important it is to move six electrons and six protons in the right order," said Dr. Roger Rousseau, who led the computational studies. "It is rather like herding cats-and very difficult cats at that." 
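The six electrons and six protons that Rousseau describes correspond to the overall reduction of dinitrogen to ammonia. Written out as a standard textbook balance (not an equation taken from the paper itself):

```latex
\mathrm{N_2} + 6\,\mathrm{H^+} + 6\,\mathrm{e^-} \longrightarrow 2\,\mathrm{NH_3}
```

Each nitrogen atom must gain three electrons and bond to three protons, and delivering those in the wrong order, for example by protonating the metal instead of the bound nitrogen, derails the sequence.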
There is a long tradition of turning dinitrogen (N2) into ammonia (NH3) using complex molecular catalysts, materials that reduce the roadblocks to make the reaction occur and aren't consumed in the process. Of the metals studied in the column known as group 6 transition metals, chromium supported by phosphorus ligands didn't work. In fact, papers from 1970 to the present day reported failures using chromium even in an environment that was thought to goad it into working. However, Mock and his team focused on the stabilizing effect from the phosphorus atoms of a 12-membered ligand that partially surrounded the chromium metal. Every fourth atom in the ring is a phosphorus atom that forms a bond with the chromium atom. The chemical bonds formed with three phosphorus atoms of the large ring together with two additional phosphorus donor atoms of a second ligand make the chromium atom very electron rich, which then can bind the dinitrogen. Once bound, the dinitrogen triple bond is weakened by coordination to the metal. The team showed that the correct surroundings enhance chromium's ability to bind and activate dinitrogen. In fact, the dinitrogen molecule in this case is more activated than in similar complexes with the heavier metals, molybdenum and tungsten, which have similar properties to chromium. However, breaking the dinitrogen triple bond is still a delicate task. The team found that managing the number of phosphorus atoms and the electron-donating ability of these atoms was crucial. The team ran the reactions with acid at -50°C so that certain intermediate products containing nitrogen-hydrogen bonds didn't fall apart. In these reactions, hydrogen ions from the acid surrounding the complex formed only a small amount of ammonia. They showed that adding acid caused the protons to favor binding with the metal, an unwanted connection. 
Additional optimization of the chromium complex and the conditions is required to control the formation of the desired nitrogen-hydrogen bonds. The reaction still has secrets to reveal. The team is digging into two of them. First, how do the 12-membered rings that support the chromium form? In the experiments, the rings self-assemble around the chromium. What factors dictate that formation? Also, how can the protons be controlled to prevent them from binding to the electron-rich chromium and form additional bonds with nitrogen? Answering these questions could lead to learning how to control the reaction's environs and lead to a catalyst that is fast, efficient, and long lasting, to convert nitrogen to ammonia. Explore further: Converting Nitrogen to a More Useful Form More information: Michael T. Mock et al. Protonation Studies of a Mono-Dinitrogen Complex of Chromium Supported by a 12-Membered Phosphorus Macrocycle Containing Pendant Amines, Inorganic Chemistry (2015). DOI: 10.1021/acs.inorgchem.5b00351
<urn:uuid:ea6e3073-ac62-4140-a1e5-a39e89eeac91>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/center-for-molecular-1399030/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00048-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947154
4,102
3.796875
4
ActiveX technology was developed by Microsoft in the mid-1990s based on the Component Object Model (COM) and Object Linking and Embedding (OLE) technologies. The intent was to provide easily reusable building blocks of programs (active content) via objects whose interfaces can be integrated with other COM objects or programs. Many common applications (including Internet Explorer, Microsoft Office, and Windows Media Player) use it to enhance their feature sets and embed their functionality into other applications. Non-Microsoft applications and websites may also install their own ActiveX controls to provide unique functionality (Adobe Shockwave, for instance). ActiveX controls are typically identified by their class identifier (CLSID), a unique value associated with each control which is referred to as the globally unique identifier (GUID). ActiveX controls are also identified through a program identifier (ProgID), which gives each control a user-friendly name. The ProgID and CLSID relationship is comparable to the interaction between an IP address and DNS. A CLSID key exists to provide information used by the default COM handler to return details about a class when it is running. Several public websites list CLSIDs and their accompanying information. ActiveX controls are often compared to Java applets because both enable end users to download small programs into their web browsers, which results in more dynamic and interactive web pages. A major difference between ActiveX controls and Java applets is that ActiveX controls are granted higher levels of control over applications. These additional privileges make them a more attractive target for those individuals looking to perform malicious activities. Adapted from Cisco’s Preventing ActiveX Exploits article.
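As a small, self-contained illustration of the identifier format discussed above, the registry form of a CLSID can be checked with a regular expression. The helper function and the example strings are illustrative assumptions, not taken from the source article:

```python
import re

# A CLSID is a GUID in registry notation: five groups of hex digits
# (8-4-4-4-12) wrapped in braces.
CLSID_RE = re.compile(
    r"^\{[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}"
    r"-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}\}$"
)

def is_valid_clsid(s: str) -> bool:
    """Return True if s is syntactically a CLSID (braced GUID)."""
    return bool(CLSID_RE.match(s))

# A ProgID, by contrast, is a dotted human-readable name such as
# "Vendor.Component.1"; on a Windows machine the mapping between the two
# lives under HKEY_CLASSES_ROOT and could be read with the winreg module.
print(is_valid_clsid("{D27CDB6E-AE6D-11CF-96B8-444553540000}"))  # True
print(is_valid_clsid("Vendor.Component.1"))                      # False
```

On a real system, resolving a ProgID to its CLSID (and back) is a registry lookup rather than a string check, which is why the IP-address-and-DNS analogy fits.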
<urn:uuid:0a8a5954-3957-4d1e-9cb5-7ae46f992f4f>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2011/01/10/understanding-activex-controls/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00442-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934802
340
3.125
3
Reusability is a key to any plan to make human life interplanetary, according to the CEO of SpaceX, one of the companies tasked with ferrying cargo, and someday astronauts, to the International Space Station. And the company will take the first step in trying to prove that point this Sunday when it is scheduled to launch a rocket propelling a craft carrying nearly 5,000 pounds of supplies to the space station. This time, unlike the two previous SpaceX trips to the space station, the company hopes to recover and then reuse the craft's rocket in another mission. This time, the company will try to recover the rocket launched Sunday from the ocean. Future missions will use "legs" built onto the rocket to gently fall to land. SpaceX plans to make the first stage (shown here) of its Falcon 9 rocket reusable by recovering it from the ocean on Sunday and, later, on land using the 'legs' on each side. (Photo: SpaceX) "If one can figure out how to effectively reuse rockets just like airplanes, the cost of access to space will be reduced by as much as a factor of 100," said Elon Musk, CEO of SpaceX, in a statement. "A fully reusable vehicle has never been done before. That really is the fundamental breakthrough needed to revolutionize access to space." The company noted that its Falcon 9 rocket was built at a cost of about $54 million. "The majority of the launch cost comes from building the rocket, which flies only once," the company said. "Compare that to a commercial airliner. Each new plane costs about the same as Falcon 9, but can fly multiple times per day, and conduct tens of thousands of flights over its lifetime." The SpaceX-3 launch is set for 4:41 a.m. ET Sunday from Cape Canaveral Air Force Station in Florida. NASA noted today that there is a 70% chance of the weather being favorable. It did note that thick cloud cover may be an issue.
The Dragon cargo craft will bring 4,969 pounds of cargo to the orbiting laboratory and returning 3,578 pounds to Earth. The cargo being ferried to the space station includes computer hardware, scientific experiments and new spacewalk tools. SpaceX made its first resupply mission in 2012 and the second last spring. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is firstname.lastname@example.org. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "SpaceX, NASA launching reusable rocket" was originally published by Computerworld.
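Musk's factor-of-100 claim is, at bottom, amortization: spread the one-time build cost over many flights. A rough sketch using the $54 million figure quoted above; the flight count and per-flight refurbishment cost are placeholder assumptions, not SpaceX numbers:

```python
def cost_per_flight(build_cost: float, flights: int,
                    refurb_per_flight: float = 0.0) -> float:
    """Amortized vehicle cost: build once, refurbish on each reuse."""
    if flights < 1:
        raise ValueError("need at least one flight")
    return build_cost / flights + refurb_per_flight

BUILD_COST = 54e6  # Falcon 9 build cost cited in the article, in dollars

# Expendable: a single flight pays for the whole rocket.
print(cost_per_flight(BUILD_COST, flights=1))      # -> 54000000.0
# Hypothetical 100-flight reuse with a made-up $250k refurbishment cost:
print(cost_per_flight(BUILD_COST, flights=100,
                      refurb_per_flight=250_000))  # -> 790000.0
```

Even with a generous refurbishment allowance, the per-flight vehicle cost drops by well over an order of magnitude, which is the airliner comparison the company makes.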
<urn:uuid:f28da607-064a-47e0-896f-3d1ad1b30953>
CC-MAIN-2017-04
http://www.networkworld.com/article/2175142/data-center/spacex--nasa-launching-reusable-rocket.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00442-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946259
578
2.953125
3
With the advent of Agile frameworks, the need to improve speed to market through faster testing has emerged, leading to greater adoption of model-based testing. Model-based testing entails generating test scenarios using system models as inputs. Examples of models include flow charts, data flow diagrams, decision tables, process flows, UMLs and state machines. Benefits include increased speed to market, improved test coverage and improved quality. Here is my list of five model use cases which leverage this technique:
- Use case 1: Time partition test case design for embedded and automotive systems: In this technique, test cases are designed using special state machines as inputs. These state machines represent the continuous behavior of systems. Test scenarios for continuous behaviors are modeled based on the inputs and outputs of these state machines. A simplistic example is testing embedded devices with scenarios such as turning all lights off when all inputs are zero, illuminating one light when the input is 0, 1, and so forth.
- Use case 2: Regression suites for packaged products such as SAP, Oracle and Salesforce: In this use case, business process models, such as flow charts created in the blueprinting phase, serve as inputs. Test scenarios are generated using algorithms such as condition, branch or path coverage.
- Use case 3: Test data design: In this scenario, classification trees are used as inputs for combinatorial test data. The advantage of this technique is that it reduces the number of test cases, which is the basis of pairwise testing. This can also be leveraged for random test case design, for example by creating a test data service which supplies the tests with data when required. In this scenario, test data is specified by properties which a service invokes at run time based on the combinations.
For example, this would work very well for random creation of currencies and real-time conversion rates for a global banking rollout.
- Use case 4: Agile testing: Two techniques are most useful in this scenario. The first involves using a domain-specific language, which developers use as a specification for the implementation and testers use as input to generate testable criteria. The second uses state diagrams to describe flows of events in user stories, which helps in building system integration testing scenarios.
- Use case 5: Service-Oriented Architecture (SOA) testing: The UML format serves as an excellent mechanism to determine interfaces, input parameters and output parameters. UML state machines determine the states of an object during the execution of a process, enabling effective test scenario generation.
Model-based testing requires the testing community to build new skills: abstract thinking and an understanding of the requirements and architecture process. Start small with one pilot and very soon you will never want to design test scenarios any other way.
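Use cases 2 and 4 above both reduce to walking a model graph and emitting each distinct path as a test scenario. Here is a minimal sketch of path-coverage generation; the checkout flow used as the model is a hypothetical example, not one from the article:

```python
from typing import Dict, List, Optional

# A tiny business-process model as a directed acyclic graph:
# node -> successor nodes. This stands in for a flow chart
# produced during blueprinting.
FLOW: Dict[str, List[str]] = {
    "start":    ["login"],
    "login":    ["browse", "error"],
    "browse":   ["checkout", "logout"],
    "checkout": ["logout"],
    "error":    ["end"],
    "logout":   ["end"],
    "end":      [],
}

def all_paths(model: Dict[str, List[str]], node: str = "start",
              path: Optional[List[str]] = None) -> List[List[str]]:
    """Enumerate every start-to-terminal path (all-paths coverage).

    Assumes the model is acyclic; a cycle would recurse forever.
    """
    path = (path or []) + [node]
    if not model[node]:          # terminal node: one complete scenario
        return [path]
    scenarios: List[List[str]] = []
    for nxt in model[node]:
        scenarios.extend(all_paths(model, nxt, path))
    return scenarios

for scenario in all_paths(FLOW):
    print(" -> ".join(scenario))
```

Branch or condition coverage would prune this set; for acyclic models like the one above, all-paths enumeration is the exhaustive upper bound.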
<urn:uuid:e297b172-74e5-474d-906e-1ba102cbfd4f>
CC-MAIN-2017-04
https://www.capgemini.com/blog/capping-it-off/2016/12/model-based-testing-primer-for-beginners
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00442-ip-10-171-10-70.ec2.internal.warc.gz
en
0.900622
572
3.203125
3
The data center industry is constantly evolving, in accordance with and sometimes even exceeding Moore’s Law, the infamous prediction that capabilities will double every two years. One of the biggest cruxes of big data is speed: the faster the connection, the better the service. Increased demand and new technology are driving data centers to adopt new 40 Gbps and 100 Gbps Ethernet connections for their internal infrastructure. Green House Data aims to include 100 Gbps cabling in a new Cheyenne expansion, opening in the next 6 – 12 months. How will this new speed standard impact the data business? The general trend in data centers is towards virtualization, in which multiple virtual servers run on a single piece of hardware. This helps maximize efficiency as the masses demand their streaming video, mobile computing and other data-intensive operations. As more of the world comes online and more businesses turn to the cloud for their infrastructure, data demand will continue to skyrocket. Virtual servers save computing resources, but not network resources. As virtual servers increase, faster networks are necessary to keep up with demand. While some of this increased load can be mitigated by cleaning up inefficient routing paths or adjusting drivers, the amount of data will only continue to increase, necessitating new hardware and faster connections. Currently 10 Gbps are the fastest Ethernet connections in wide use. For perspective, most homes and businesses connect to Ethernet with a Category 5 Twisted Pair cable, which can transmit up to 1 Gbps. Data centers are beginning to adopt IEEE 802.3ba, the new standard for 40 Gbps and 100 Gbps connections–40 and 100x times faster, respectively, than the twisted pair cable mentioned above. These new connections will dramatically raise data center capacity. Fiber optic lines transfer data by translating bits and bytes into literal flashes of light, which bounce their way down a transfer cable. 
In a data center, the external connections terminate in racks with connections to internal routers, which direct information to servers. These internal connections carry vast amounts of information on fiber optic lines. The new 802.3ba standard allows for multiple 10Gbps channels to be run in parallel or wavelength division multiplexing (WDM), depending on whether they are single or multimode fiber (MMF) cables. Basically, the 10Gbps capacity is stacked to become 4x or 10x faster. In most cases, MMF cables are used to provide the additional fiber strands needed to achieve 40 – 100 Gbps connections. This is called Multilane Distribution (MLD), consisting of parallel links or lanes. An MMF cable allows multiple wavelengths of light to travel down its path because of a larger core diameter. It can be used with cheaper electronics and broadcast methods and it often easier to implement. Single-mode optical fiber (SMF) is designed to carry a single ray of light and is much narrower. Single-mode can be more efficient, because there are fewer opportunities for the data to slow down from dispersion or other factors. Wavelength division multiplexing splits multiple wavelengths into separate fibers for single-mode transfer. This allows more data to be transferred on a single cable by using different colors, or wavelengths, of the light for different pieces of information. Specialized equipment—a multiplexer and demultiplexer, placed at either end of the cable—joins or splits this mixed-light signal. Older networks can easily be upgraded to faster speeds through WDM. Green House Data was recently approved for a managed data center services grant from the Wyoming Business Council, a portion of which is earmarked for 100Gbps circuits. Luckily, existing infrastructure can be modified for 100Gbps function with added cabling and equipment. 
This added equipment is no small investment (each 40 Gbps port on a switch could cost as much as $2,500), but at least existing SMF or high-speed MMF (if rated at OM3 or OM4) cables can be reused. Additional ribbons and either new 24-fiber or additional stacked 12-fiber connectors may be necessary as well. Deployment of 40 and 100 Gbps Ethernet links within data centers has mostly started in small chunks where traffic is heaviest, or from rack to rack within the center. There are only a handful of data center providers with 100 Gbps installed today, but with demand increasing more and more rapidly, the migration is inevitable. Posted By: Joe Kozlowicz
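The lane arithmetic behind the 802.3ba stacking described above is simple multiplication: link rate equals the number of parallel lanes times the per-lane rate. A small sketch; treating 1 TB as 8e12 bits (decimal units) is our assumption:

```python
def aggregate_rate_gbps(lanes: int, lane_rate_gbps: float) -> float:
    """Aggregate capacity of a multilane (MLD) link."""
    return lanes * lane_rate_gbps

# The arrangements described above: 10 Gbps lanes stacked 4x or 10x.
print(aggregate_rate_gbps(4, 10.0))   # -> 40.0
print(aggregate_rate_gbps(10, 10.0))  # -> 100.0

# Time to move 1 TB (8e12 bits, decimal) at each aggregate rate, in seconds:
for lanes in (1, 4, 10):
    rate = aggregate_rate_gbps(lanes, 10.0)
    print(f"{lanes:>2} lane(s): {8e12 / (rate * 1e9):.0f} s per TB")
```

The same arithmetic applies whether the lanes are physically parallel fibers (MLD over MMF) or wavelengths multiplexed onto one single-mode fiber (WDM).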
<urn:uuid:65f22679-4e70-464f-817d-be843834f198>
CC-MAIN-2017-04
https://www.greenhousedata.com/blog/the-100gbps-gorilla-new-connection-speeds-hit-data-centers
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00470-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914685
908
2.578125
3
Kaspersky Lab has released an article entitled “The Perils of the Internet” authored by malware analyst Eugene Aseev. As the title suggests, this article looks at the threats which make web surfing risky. Today, attacks launched over the Internet are both the most numerous and the most dangerous type of threat. In Q2 2010 Kaspersky Lab products blocked over 157 million attempts to infect computers via the Internet. This article answers questions such as “How does a computer get infected during web surfing?” and “Who profits from Internet attacks?” Attacks via the Internet usually have two steps: redirecting a user to a malicious resource, and downloading a malicious executable file onto his or her computer. Cybercriminals have two choices: persuade the user to download the program, or conduct a drive-by download. In the first case, cybercriminals resort to spam, flashy banners and “black hat” search engine optimization. In a drive-by attack, a computer can be infected without any user involvement, and without the user noticing anything untoward. Most drive-by attacks are launched from infected legitimate resources. As a rule, drive-by attacks do not entail persuading a user to visit a particular site; the user will come across the site as part of his regular routine. Such a site might be, for example, a legitimate (but infected) news website, or an online shop. One of the most common methods used to launch drive-by attacks today is the use of exploit packs that exploit vulnerabilities in legitimate software programs running on the victim machine. Today, exploit packs represent the evolutionary peak of drive-by attacks, and are regularly modified and updated to include exploits for newly identified vulnerabilities. Everyone involved in Internet attacks - from the owners of web resources which host banners to those who participate in affiliate programs - makes money from innocent users, by using their money, personal information, computing power, etc.
“In order to protect yourself, you need to update your software regularly, especially software that works in tandem with your web browser,” says the author. “A security solution which is kept up-to-date also plays an important role. And, most importantly, you should always be cautious regarding information which is spread via the Internet.” The full version of “The Perils of the Internet” is available at www.securelist.com/en.
<urn:uuid:7c84b2d4-0474-490b-9b73-2cddaa0c22a8>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2010/The_Perils_of_the_Internet_who_profits_from_millions_of_online_attacks_
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00406-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943335
507
3.109375
3
TAP (Traffic Access Point)
Noun: A passive hardware device that provides ‘access’ to optical traffic at a specific point in a network.
Verb: The act of installing a TAP. Typically used in conjunction with phrases like "best practice" that refer to the standardized deployment of TAPs. As in: "We TAP as a best practice to ensure access to the optical traffic in our environment."
Deploying TAPs in the optical networks of the modern data center is a topic of heated debate. TAPs have been common in the TCP/IP world for more than a decade, where they are deployed to provide network access to packet brokers, intrusion detection and security solutions, and performance management solutions. However, the use of TAPs in optical storage infrastructures is much less common and still retains a stigma despite being deployed in some of the largest companies around the world. Although their use is recognized as a best practice by industry leaders and analysts alike, the full understanding of the benefits that TAPs enable has not yet reached the same level of maturity it has in the broader networking world. Storage administrators often respond with pure, unbridled joy when they see the data that a TAP-enabled monitoring solution can deliver. Physical infrastructure teams, on the other hand, may respond to the idea of TAPs with ambivalence or even apprehension. The level of mixed feelings surrounding TAPs in the storage space baffles me, especially given how well accepted tapping is in the general networking space.
What is a TAP?
A TAP is a simple device that mirrors the optical signal traveling across a fiber cable. TAPs are completely passive and do not cause latency or degrade the quality of the optical signal. TAPs are enablers. With TAPs in place, IT can deploy out-of-band monitoring and diagnostic devices and have complete visibility into the actual, line-rate traffic on the SAN. Customers often ask why this is important and the answer is simple. The light doesn’t lie.
TAP-enabled monitoring solutions have direct access to the protocol level and the data. There is no device management tool acting as a middle-man trying to interpret what is happening on the physical layer. Until the combination of TAPs and TAP-enabled monitoring solutions emerged as a best-practice, SANs for most organization were impenetrable black boxes—and often blamed for most, if not all performance problems. TAP into the light To TAP or Not to TAP is really no question at all because the benefits far outweigh the perceived challenges. Tapping the storage infrastructure when it’s deployed is similar to installing a fire hydrant when your house is built, not when you have a fire. With TAPs in place IT can successfully do everything from firefighting issues to optimizing performance. When a SAN has an emergency, administrators can plug in diagnostic tools without disrupting data flow and quickly discover the root cause. When persistent, ongoing monitoring becomes a requirement, administrators can easily deploy monitoring solutions to gain immediate insight into the actual problems before they result in a noticeable impact to the system. When migrating or consolidating data center components the TAP, together with a monitoring solution, enables IT to establish highly accurate baselines for application performance before the change, monitor throughout the move and then optimize the new systems for maximum performance, availability and utilization going forward. A simple TAP The most often heard argument against tapping the storage infrastructure is the installation process. Installing TAPs when new SAN infrastructure is deployed or prior to production is the best practice. That guarantees that the infrastructure will not be disturbed when troubleshooting or optimization is required. While that’s the ideal scenario, we all recognize that’s not always reality. In some instances, IT needs to tap existing storage infrastructures. 
In that case, effective change management processes and policies are essential. TAPs can be installed at times of low activity and traffic can proactively be moved to another route so there is no application slowdown or outage during installation. With proper planning, the actual act of tapping the infrastructure takes less than five minutes. The combination of TAPs and TAP-enabled monitoring solutions are increasingly seen as the optimal method for assuring the performance of mission-critical IT environments. The benefits are clear: faster root cause identification and remediation, the ability to de-risk consolidations and migrations and, ultimately, the ability to optimize performance.
<urn:uuid:fb5197ed-2738-4b5c-bdc6-d3cdd0f8cb39>
CC-MAIN-2017-04
http://www.computerworld.com/article/2474657/data-storage-solutions/tap-into-the-light.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00314-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933207
935
3.109375
3
An article in Medical Daily reports that researchers at the University of California, San Diego have devised a computational technique that allows them to reduce X-ray radiation doses by a factor of ten or more for tumor analysis. The approach uses GPUs (NVIDIA Tesla C1060 GPUs in this case) to reconstruct an accurate image of a tumor with fewer CT scans. CT scans are used to generate the image of tumors prior to cancer treatment, a process known as image-guided radiation therapy (IGRT). The problem is that repeated CT scans during a therapy regime raise the cumulative radiation dose, which worries physicians and patients. Reducing the X-ray projections, in both number and strength, can reduce exposure, but the images produced need compute-intensive reconstruction to produce an accurate picture of the tumor. Since the CT scanning is done during treatment setup, you need fast turnaround. That's where the GPUs come in. With only 20 to 40 X-ray projections in total and 0.1 mAs per projection, the team achieved images clear enough for image-guided radiation therapy. The reconstruction time ranged from 77 to 130 seconds on an NVIDIA Tesla C1060 GPU card, depending on the number of projections, an estimated 100 times faster than similar iterative reconstruction approaches.
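A quick back-of-the-envelope check of the article's numbers shows why the GPU matters for same-session treatment setup. The 100x figure is the article's own estimate; the arithmetic is ours:

```python
# Figures from the article: GPU reconstruction took 77-130 seconds and was
# "an estimated 100 times faster" than similar iterative approaches.
GPU_SECONDS = (77, 130)
SPEEDUP = 100

for s in GPU_SECONDS:
    implied_hours = s * SPEEDUP / 3600  # implied time without the GPU
    print(f"GPU: {s:>3} s  ->  roughly {implied_hours:.1f} h without it")
```

A reconstruction that finishes in about two minutes fits inside a treatment-setup session; one that takes two to three and a half hours does not.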
<urn:uuid:c07ea8c5-e3c7-42ad-bc80-21ee809780d8>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/08/02/gpgpu_approach_lowers_x-ray_exposure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00241-ip-10-171-10-70.ec2.internal.warc.gz
en
0.909198
253
3.234375
3
Using convoluted language in explanations to law makers, the NSA has blurred the lines of legality, conducting mass surveillance and network intrusions with impunity. Now, a new report from Der Spiegel, citing leaked documents, examines how the NSA gets the job done. On Sunday, Der Spiegel published a series of articles focused on TAO, the NSA's Tailored Access Operations unit. Formed in 1997, TAO has hacked 258 targets in nearly every country in the world by using software flaws, intercepted data transmissions, or hardware implants. According to the documents, the NSA uses traditional intercept technologies, including base stations to capture mobile phone transmissions, or mobile Wi-Fi tools that enable injection attacks, in addition to passive data collected over the wire. One such example of passive collection comes from error reports generated by Microsoft's Windows operating system. According to Der Spiegel, TAO collects the Windows error reports, which are largely transmitted in clear text, and uses the information in them to assess the vulnerability of a given target; including the presence of vulnerable third-party software. From the article: "The automated crash reports are a "neat way" to gain "passive access" to a machine, the presentation continues. Passive access means that, initially, only data the computer sends out into the Internet is captured and saved, but the computer itself is not yet manipulated. Still, even this passive access to error messages provides valuable insights into problems with a targeted person's computer and, thus, information on security holes that might be exploitable for planting malware or spyware on the unwitting victim's computer." Interestingly enough, the same day that the Der Spiegel report was published, security vendor Websense outlined a talk that is to be given at the RSA Conference in February, which focuses on this exact topic: "One troubling thing we observed is Windows Error Reporting (a.k.a. Dr. 
Watson) predominantly sends out its crash logs in the clear. These error logs could ultimately allow eavesdroppers to map out vulnerable endpoints and gain a foothold within the network for more advanced penetration..." Another revelation from the Der Spiegel article, in the sense that it exposed additional information to the public, is the fact that the NSA has persistent backdoors into several types of networking equipment, including gear sold by HP, Juniper, Dell, Cisco, and China's Huawei. If needed, the NSA will intercept hardware deliveries and make the required hardware or firmware modifications in order to obtain access. While the documents reference products from six years ago, it's unlikely that newer hardware and firmware are out of reach for the TAO unit.
- HP ProLiant 380DL G5 servers (hardware implant)
- Dell PowerEdge 1850 / 2850 / 1950 / 2950 RAID servers with BIOS versions A02, A05, A06, 1.1.0, 1.2.0, or 1.3.7 (BIOS exploits)
- Dell PowerEdge 1950 / 2950 servers (hardware implant, JTAG interface)
- Huawei Eudemon 200, 500, and 100 series firewalls (installed as a boot ROM upgrade)
- Huawei routers, targeted as part of a joint operation between the NSA and the CIA to exploit Huawei equipment (project: TURBOPANDA)
- Juniper Netscreen ns5xt, ns25, ns50, ns200, ns500, and ISG 1000 firewalls
- Juniper SSG 500 and SSG 300 firewalls (320M, 350M, 520, 550, 520M, 550M)
- JUNOS (Juniper's customized version of FreeBSD) on all J-Series, M-Series, T-Series routers
- Cisco PIX and ASA (Adaptive Security Appliance) firewalls, 5505, 5510, 5540, 5550 (firmware implant)
Cisco, in a statement, said it is concerned about the claims made by the NSA in the published documents, and is reaching out to Der Spiegel in order to obtain more information. 
John Stewart, Cisco's Chief Security Officer, blogged: "We are deeply concerned with anything that may impact the integrity of our products or our customers’ networks and continue to seek additional information... At this time, we do not know of any new product vulnerabilities, and will continue to pursue all avenues to determine if we need to address any new issues. If we learn of a security weakness in any of our products, we will immediately address it. As we have stated prior, and communicated to Der Spiegel, we do not work with any government to weaken our products for exploitation, nor to implement any so-called security ‘back doors’ in our products." I've reached out to all of the named vendors in the hope they would offer their reaction and address the NSA's claims. However, due to the holidays, many of their press representatives were out of the office. So I'll update this post if there's any response. In the meantime, if you're interested in learning more about the TAO unit, the Der Spiegel reports are worth reading. Juniper has responded with the following: Juniper Networks recently became aware of, and is currently investigating, alleged security compromises of technology products dated from 2008 and made by a number of companies, including Juniper. We take allegations of this nature very seriously and are working actively to address any possible exploit paths. As a company that consistently operates with the highest of ethical standards, we are committed to maintaining the integrity and security of our products. We are also committed to the responsible disclosure of security vulnerabilities, and if necessary, will work closely with customers to implement any mitigation steps. To further add, Juniper Networks is not aware of any so-called "BIOS implants" in our products and has not assisted any organization or individual in the creation of such implants. 
Juniper maintains a Secure Development Lifecycle, and it is against Juniper policy to intentionally include "backdoors" that would potentially compromise our products or put our customers at risk. Huawei sent the following: We have read the recent media reports and we have noted the references to Huawei and a number of our ICT peers. As we have said in the past, and as the media reports seem to validate, threats to network and data integrity can come from any and many sources. While the security assurance programs we have in place are designed to deter and detect such malicious activity, we will conduct appropriate audits to determine if any compromise has taken place and to implement and communicate any fixes as necessary. HP responds with a statement: HP was not aware of any of the information presented in the Der Spiegel article, and we have no reason to believe that the HP ProLiant G5 server mentioned was ever compromised as suggested in the article. HP's privacy and security policies are quite clear; we do not knowingly develop products to include security vulnerabilities. We are also active in testing and updating our products regularly to eliminate threats and make our products more secure. HP takes the privacy and security of our customer information with great seriousness. We will continue to put in place measures to keep our customers' information confidential and secure. Dell issued a statement on their blog: "... Dell has a long-standing commitment to design, build and ship secure products and quickly address instances when issues are discovered. Our highest priority is the protection of customer data and information, which is reflected in our robust and comprehensive privacy and information security program and policies. We take very seriously any issues that may impact the integrity of our products or customer security and privacy. 
Should we become aware of a possible vulnerability in any of Dell’s products we will communicate with our customers in a transparent manner as we have done in the past. "Dell does not work with any government – United States or otherwise – to compromise our products to make them potentially vulnerable for exploit. This includes ‘software implants’ or so-called ‘backdoors’ for any purpose whatsoever."
<urn:uuid:f19d912a-805e-4d9f-abf0-32ab7a3198de>
CC-MAIN-2017-04
http://www.csoonline.com/article/2136978/network-security/report-shines-new-light-on-the-nsa-s-hacking-elite.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00269-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935865
1,650
2.578125
3
A snake has been moving through the pipes and systems of a nuclear power plant near Vienna, Austria. It may not be as creepy as you think. This particular snake is a multi-jointed robotic machine.
Carnegie Mellon University's robotic snake climbs a 1-in. machinery cord at the Zwentendorf nuclear power plant in Austria. (Photo: Carnegie Mellon)
The robot, which is 37 inches long and two inches in diameter, is tethered to a control and power cable. The robot crawled through the Zwentendorf nuclear power plant's steam pipes and connecting vessels as a test of its abilities. The robotic snake proved it was able to maneuver through multiple bends, slip through open valves and negotiate vessels with multiple openings, according to researchers at Carnegie Mellon University's Robotics Institute, where it was developed. That means the robot can inspect areas of the power plant that previously had been unreachable. "Our robot can go places people can't, particularly in areas of power plants that are radioactively contaminated," said Howie Choset, a robotics professor at Carnegie Mellon. "It can go up and around multiple bends, something you can't do with a conventional borescope, a flexible tube that can only be pushed through a pipe like a wet noodle." The robot, which also has been tested in search-and-rescue environments, is made up of 16 modules, each with two half-joints that connect with corresponding half-joints on adjoining modules. It also has 16 degrees of freedom, enabling it to assume a number of configurations and to move using a variety of gaits. The robot has a video camera and LED light attached to its head, giving its controllers an image of what it's approaching. The university explained that even though the robotic snake is twisting, turning and rotating as it moves through pipes and over obstacles, the image remains steady because it's automatically corrected to be aligned with gravity. 
The university's robotic research team sent the snake into a variety of pipes at the power plant, which was built in the 1970s but never used. Since it doesn't have any radioactive contamination, the plant was ideal for testing the robot. Nuclear power plants in general have miles of pipes for carrying water and steam. Much of that piping is difficult or nearly impossible to inspect because of its positioning and because radioactivity limits people from being in specific areas. Kevin Lipkin, senior systems engineer at the Robotics Institute, said in a statement that the longest deployment in a pipe during the Zwentendorf testing was 60 feet. "We could have gone farther, but we need to figure out how to best manage longer deployments," he said. "We were just being cautious because it was our first time in this plant." Carnegie Mellon scientists aren't the only ones who have been working on robotic snakes. In 2008, the Sintef Group, a research company based in Trondheim, Norway, announced that it had designed its own robotic snake. Sintef's robotic snakes were 1.5 meters long and made of aluminum. They were designed to inspect and clean complicated industrial pipe systems that are typically narrow and inaccessible to humans. These robots also had multiple joints to enable them to twist vertically and climb up through pipe systems to locate leaks in water systems, inspect oil and gas pipelines and clean ventilation systems. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. This story, "Robotic snake ssslithers through nuclear plant," was originally published by Computerworld.
<urn:uuid:0d3e660c-a10d-4dad-a4f1-e8f2b1e9e95f>
CC-MAIN-2017-04
http://www.networkworld.com/article/2168012/data-center/robotic-snake-ssslithers-through-nuclear-plant.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00177-ip-10-171-10-70.ec2.internal.warc.gz
en
0.969612
790
3.640625
4
Having discussed the big picture when it comes to Link-State protocols, let's now take a more detailed look. With IP there are two Link-State protocols in use:
- OSPF: Open Shortest Path First
- IS-IS: Intermediate System to Intermediate System
Both work in pretty much the same way, but because it's more commonly used, for now we'll focus on OSPF. In an OSPF router, the topology database is referred to as the Link State Database (LSDB). The LSDB contains the detailed information that a router has regarding the topology of the internetwork. Each entry in the LSDB is called a Link State Advertisement (LSA). In OSPF, there are five packet types used:
- Hello: Used to build and maintain neighbor relationships.
- DBD: Database Description, a summary of the LSDB.
- LSR: Link State Request, used to request one or more missing LSAs.
- LSU: Link State Update, used to send one or more LSAs.
- LSAck: Link State Acknowledgement, used to acknowledge receipt of an LSU.
When a new neighbor is found, an OSPF router's goal is to synchronize its LSDB with that of the neighbor. By "synchronize" we mean make the LSDBs consistent, so that they contain the same LSAs. To accomplish this, the routers compare LSDBs ("You show me yours, and I'll show you mine!") and any LSA that a router is missing it requests from the neighbor. During this process, the neighbors proceed through the following OSPF states:
- Down: The start of the neighbor process.
- Init: The neighbor relationship is being initialized.
- Two-Way: The neighbor relationship has been confirmed.
- Exstart: The routers negotiate which sends its DBD first.
- Exchange: The DBDs are being exchanged.
- Loading: The LSRs and LSUs are being exchanged.
- Full: The LSDBs are synchronized.
Once the LSDB is complete, a router runs the Shortest Path First (SPF) algorithm, generating its routing table. In OSPF the metric is cost, and SPF finds the lowest-cost path to each destination prefix. 
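The synchronization step described above (exchange database summaries, then request and send only the missing LSAs) can be sketched as a simple set operation. This is an illustrative model only; the router names and LSA identifiers are hypothetical, and real OSPF carries DBD, LSR, and LSU packets over the wire:

```python
# Sketch of OSPF-style LSDB synchronization: each router summarizes its
# LSDB (as in a DBD), the neighbor computes which LSAs it lacks (as in
# an LSR), and the missing entries are sent back (as in an LSU).

def missing_lsas(my_lsdb, neighbor_summary):
    """Return the LSA ids the neighbor advertises that we do not hold."""
    return set(neighbor_summary) - set(my_lsdb)

def synchronize(lsdb_a, lsdb_b):
    """Exchange missing LSAs in both directions until the LSDBs match."""
    for lsa_id in missing_lsas(lsdb_a, lsdb_b):
        lsdb_a[lsa_id] = lsdb_b[lsa_id]  # LSU from B to A
    for lsa_id in missing_lsas(lsdb_b, lsdb_a):
        lsdb_b[lsa_id] = lsdb_a[lsa_id]  # LSU from A to B
    return lsdb_a, lsdb_b

# Two routers with partially overlapping databases
r1 = {"R1": "lsa-r1", "R2": "lsa-r2"}
r2 = {"R2": "lsa-r2", "R3": "lsa-r3"}
synchronize(r1, r2)
assert r1 == r2  # the LSDBs are now consistent
```

Once neither side is missing anything, the neighbors are effectively in the Full state.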
Note that SPF is where OSPF gets its name: it’s an “Open” (non-proprietary) protocol that uses SPF. The SPF algorithm is commonly referred to as “Dijkstra’s Algorithm”, after Edsger Dijkstra (pronounced Dike-stra), the computer scientist who developed it. After the routing tables have been generated, the network is converged. The goal is to maintain full state (LSDB synchronization) with the neighbors. Let’s say that router R1 senses a topology change, such as gaining or losing a link. It will then update the R1 LSA in its LSDB, flood a copy of the revised R1 LSA to its neighbors, and run SPF to regenerate its routing table. R1’s neighbors will acknowledge receipt of R1’s revised LSA, replace the old R1 LSA in their LSDBs with the new R1 LSA, flood the new R1 LSA to their neighbors, and run SPF. In this manner, the new R1 LSA propagates throughout the internetwork, and as it does it updates the LSDB, triggers SPF, and regenerates the routing table of each router. Once this process is complete (and it typically takes only seconds) routing is again converged. LSUs are sent only when changes occur (that is, they’re triggered updates), so in between LSUs a router sends periodic Hellos to reassure its neighbors of its continued presence. Since Hellos are small, they don’t take much bandwidth or CPU power to process, and can be sent relatively frequently (every few seconds is typical). Because changes are only being advertised when they occur, the result is that Link-State protocols such as OSPF scale better than Distance-Vector protocols like RIP, which periodically flood the entire routing table even in the absence of any changes. Next time, we’ll look at some additional details and enhancements regarding OSPF. Author: Al Friebe
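The SPF computation itself can be illustrated with a compact version of Dijkstra's algorithm. This is a generic sketch, not vendor or OSPF implementation code; the router names and link costs below are made up:

```python
import heapq

def spf(topology, root):
    """Dijkstra's shortest-path-first: lowest-cost path from root to each node.

    topology maps each router to {neighbor: link_cost}, a tiny stand-in
    for the information held in a synchronized LSDB.
    """
    cost = {root: 0}
    heap = [(0, root)]
    while heap:
        c, node = heapq.heappop(heap)
        if c > cost.get(node, float("inf")):
            continue  # stale heap entry; a cheaper path was already found
        for neighbor, link_cost in topology[node].items():
            new_cost = c + link_cost
            if new_cost < cost.get(neighbor, float("inf")):
                cost[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return cost

# Hypothetical four-router topology
topology = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 1, "R4": 5},
    "R4": {"R2": 1, "R3": 5},
}
costs = spf(topology, "R1")
# R2 is reached at cost 7 (via R3 and R4), cheaper than the direct
# 10-cost link; R3 costs 1 and R4 costs 6.
```

Each router runs this calculation with itself as the root, which is why every router can build its own routing table from the same synchronized LSDB.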
<urn:uuid:e2d1cc62-a6ed-4b3e-8d81-71c3c97c0ae6>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2009/11/02/ospf-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00205-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919506
929
2.890625
3
One of the classic debates in computer science concerns whether artificial intelligence or virtual reality is the more worthwhile pursuit. The advocates of artificial intelligence argue that computers can replace the need for human cognition, and will eventually be able to out-think us. The advocates of virtual reality argue that computer systems augment human intuition more effectively than they replace it, and that a human/machine symbiosis will always be more powerful than machines alone. This debate has considerable relevance for the world of computer security. Many of the systems that we build to protect our networks work automatically to quarantine virus infected files or block attacks, and indeed, automated attacks often happen more quickly than human beings can react. However, sophisticated attackers have proven that they can effectively outsmart our machines. Obfuscated malware avoids detection by anti-virus software while exploits that target 0-day vulnerabilities slip past intrusion detection systems. Perhaps in order to surmount these problems we need to bring people back into the loop on the defensive side. The VizSec Workshop is an international academic conference that explores the intersection of human machine interfaces and cyber security challenges in search of the right balance between automation and human insight. These subjects are particularly interesting to those of us at Lancope, where I work as Director of Security Research. We build systems that enable human operators to better understand what is going on in their computer networks, with the ultimate goal of detecting and analyzing malicious activity that fully automated security systems have missed. For this year’s VizSec Workshop, Lancope prepared some interesting visualizations of malware command and control behavior. The goal is to see if we can visually differentiate certain kinds of malware behavior from legitimate network traffic. 
The data available from Lancope's malware research suggests that 85% to 95% of malware samples use TCP port 80 to communicate with their command and control servers. We decided to investigate the other TCP and UDP ports chosen by the remaining samples to see if there are any interesting patterns that emerge. We took a look at the command and control behaviors of a collection of nearly two million unique malware samples that were active between 2010 and 2012. These samples reached out to nearly 150,000 different command and control servers on over 100,000 different TCP and UDP ports. We created heat maps representing the relative popularity of each port. Each pixel in the images we generated represents a single port number, and the color of each pixel represents the number of command and control hosts in our sample set utilizing that port. In order to create an example of legitimate traffic to compare this data against, we monitored a small office network over the course of one month, and collected information about the ports that computers on that network contacted. We generated images out of that data too, and certain distinctions were immediately visible.
The command and control ports used by 2 million malware samples.
Malware authors seem to prefer to use low port numbers, whereas legitimate software often uses higher ports. In general, popular malware command and control ports were clustered below port 10,000, whereas the density of ports below 10,000 used on the legitimate network was relatively low. The difference is particularly clear for ports below 1024, which is known as the "well known port" range in Internet standards. Our malware samples used 866 "well known" TCP ports, but the legitimate traffic only used 166. On the UDP side, 1018 "well known" ports were used by malware, but only 19 were used on the legitimate network. 
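The pixel-per-port heat-map construction described above can be sketched in a few lines. The port counts below are made-up toy values; the real images were built from the full malware and office-network datasets:

```python
def port_heatmap(port_counts, width=256):
    """Lay all 65,536 TCP/UDP port numbers out as rows of `width`
    consecutive ports; each cell holds that port's host count."""
    grid = [0] * 65536
    for port, count in port_counts.items():
        grid[port] = count
    return [grid[i:i + width] for i in range(0, 65536, width)]

# Toy counts: heavy use of low "well known" ports, sparse high ports
counts = {80: 1800, 443: 900, 8080: 120, 53: 75, 50000: 3}
heat = port_heatmap(counts)
# 256 rows of 256 cells; port 80 lands in row 0, column 80,
# and port 443 in row 1, column 443 - 256 = 187.
```

Rendering `heat` with any image library (for example, matplotlib's `imshow`) reproduces the kind of picture described in the article, with popular ports appearing as bright cells.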
This suggests that use of unusual ports below 1024 is a behavioral anomaly that might be worth investigating – it could indicate a malware infection.
Ports used by a small office network over the course of a month.
A similar observation can be made about the use of the so-called "ephemeral port range". TCP and UDP ports above 49,151 are supposed to be dynamically assigned for use by legitimate software applications. This would suggest that they are used transiently. However, many of these ports were used for command and control communications by malware in our sample set. Command and control communications tend to involve consistent communication over the same port. Consistent use of a port above 49,151 is another indicator of a possible malware infection. One of the strangest features of the malware command and control image that we generated is a set of three diagonal lines of popular ports that stretch through the image. These lines start at port 0, port 36, and port 45, and in all three cases represent sequences of every 257th port from the starting point. We isolated the exclusive use of UDP ports fitting this sequence down to 14 specific malware samples. Due to the unique nature of the pattern of port utilization by these samples, it seems likely that they are all related to each other, in spite of the fact that they communicate with 6 different domain names that have been hosted in 8 different countries, all over the world. It is possible that the same botnet operator is responsible for propagating all of these samples. While there is no end in sight to the debate between advocates of Artificial Intelligence and Human Computer Interaction, it is clear that visualizations of computer network activity can lead to interesting insights for network security professionals. 
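The three diagonal lines observed in the malware image, every 257th port starting from 0, 36, and 45, reduce to a simple congruence test, which makes the pattern easy to flag in traffic analysis. The offsets come from the article; the function name and usage are our own illustration:

```python
def in_botnet_sequence(port):
    """True if the port falls on one of the three observed stride-257
    sequences (starting points 0, 36, and 45)."""
    return port % 257 in (0, 36, 45)

# Members of the sequences...
assert in_botnet_sequence(0) and in_botnet_sequence(257)          # start 0
assert in_botnet_sequence(36) and in_botnet_sequence(36 + 257)    # start 36
assert in_botnet_sequence(45) and in_botnet_sequence(45 + 2 * 257)  # start 45
# ...and common ports that are not
assert not in_botnet_sequence(80) and not in_botnet_sequence(443)
```

Any host that consistently communicates over ports passing this test, and few others, would be a candidate match for the suspected botnet family.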
The researchers participating in VizSec are helping to advance the state of the art in this area, and the research they are doing has important applications in the fight against sophisticated computer network attacks.
<urn:uuid:d1e4ef45-3164-4ae7-bcdd-af4a87607e97>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/10/29/visual-investigations-of-botnet-command-and-control-behavior/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00507-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944616
1,089
2.71875
3
Pretty cool stuff here. NASA this week said it successfully flew its battery-powered 10-engine drone that can take off like a helicopter and fly like an aircraft. The concept aircraft, known as Greased Lightning or GL-10, could be used for small package delivery, long-endurance reconnaissance for agriculture, mapping and other survey applications. A scaled-up version could even be used as a four-person personal air vehicle, NASA researchers said. NASA said the GL-10 is currently in the design and testing phase. The initial thought was to develop a 20-foot wingspan (6.1 meters) aircraft powered by hybrid diesel/electric engines, but the team started with smaller versions for testing, built by rapid prototyping. "We built 12 prototypes, starting with simple five-pound (2.3 kilograms) foam models and then 25-pound (11.3 kilograms), highly modified fiberglass hobby airplane kits all leading up to the 55-pound (24.9 kilograms), high quality, carbon fiber GL-10 built in our model shop by expert technicians," said aerospace engineer David North. The remotely piloted plane has a 10-foot wingspan (3.05 meters), eight electric motors on the wings, two electric motors on the tail and weighs a maximum of 62 pounds (28.1 kilograms) at takeoff. Greased Lightning had already passed hover tests -- flying like a helicopter -- and now has made a flight performing the transition from vertical to forward "wing-borne" flight. No easy task: a couple of real-world aircraft, like the Osprey or the Harrier, make it look easier than it is. The next step in the GL-10 test program is to try to confirm its aerodynamic efficiency, but first is a stop at the Association for Unmanned Vehicle Systems International 2015 conference in Atlanta May 4-7, NASA stated.
<urn:uuid:b71622da-4fa1-4d84-9078-18af53afed41>
CC-MAIN-2017-04
http://www.networkworld.com/article/2917822/security0/nasa-shows-off-10-engine-helicopteraircraft-hybrid-drone-video-too.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00415-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939482
423
2.71875
3
While U.S. government officials find the current pipeline for cybersecurity talent to be lacking, 82 percent of U.S. millennials say no high school teacher or guidance counselor ever mentioned to them the idea of a career in cybersecurity, according to a survey commissioned by Raytheon and conducted by Zogby Analytics. The survey also found less than one-quarter of young adults aged 18 to 26 believed the career is interesting at all. Also, young men (35 percent) are far more interested than young women (14 percent) in a career in cybersecurity. The survey found many young adults raised on social networking trust technology and are not overly concerned about the threat of online identity theft or of their personal data being stolen. Seventy-five percent of survey respondents said they were confident their friends would only post information about them on the Internet that they are comfortable with and 26 percent said they had never changed their mobile banking password. The Facebook Generation, sometimes referred to as "Generation F," includes millennials who have grown up using social networking tools such as Twitter, Facebook, LinkedIn and Pinterest. Raytheon found that despite their risky online behavior, many millennials are becoming aware of Internet risks and are taking steps to protect themselves. Eighty-two percent of millennials password-protect their laptop or desktop computer, the survey found, while 61 percent password-protect their mobile phone. Thirty-seven percent of millennials said they had backed up the data on their laptop or desktop in the last month. Key survey findings include:
- Eighty-two percent of U.S. millennials say no high school teacher or guidance counselor ever mentioned to them the idea of a career in cybersecurity.
- Young men (35 percent) are far more interested than young women (14 percent) in a career in cybersecurity.
- Thirty percent of millennials have met someone online who gave them a fake photo, false information about their job or education, or other misleading information about themselves.
- Twenty percent have had to ask someone to take down personal information posted about them in the last year.
- Forty-eight percent have used a portable storage device for their computer that was given to them by someone else.
- Eighty-six percent said it's important to increase cybersecurity awareness programs in the workforce and in formal education programs.
"Today's millennials are tomorrow's leaders and their embrace of technology will continue to drive our economy forward," said Jack Harrington, vice president of Cybersecurity and Special Missions for Raytheon's Intelligence, Information and Services business. "This survey shows the gaps that exist in teaching personal online security to our youth and in our efforts to inspire the next generation of innovators."
<urn:uuid:7e2c12b9-ddb2-4867-9d47-16930e2ab1c8>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/10/24/most-young-adults-not-interested-in-a-cybersecurity-career/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00287-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962548
561
2.6875
3
Cloud Computing Services
Who uses cloud computing services and why?
Corporate and government entities utilize cloud computing services to address a variety of application and infrastructure needs such as CRM, database, compute, and data storage. Unlike a traditional IT environment, where software and hardware are funded up front by department and implemented over a period of months, cloud computing services deliver IT resources in minutes to hours and align costs to actual usage. As a result, organizations have greater agility and can manage expenses more efficiently. Similarly, consumers utilize cloud computing services to simplify application use; store, share, and protect content; and enable access from any web-connected device.
How cloud computing services work
Cloud computing services have several common attributes:
- Virtualization - cloud computing utilizes server and storage virtualization extensively to allocate/reallocate resources rapidly
- Multi-tenancy - resources are pooled and shared among multiple users to gain economies of scale
- Network access - resources are accessed via web browser or thin client using a variety of networked devices (computer, tablet, smartphone)
- On demand - resources are self-provisioned from an online catalogue of pre-defined configurations
- Elastic - resources can scale up or down, automatically
- Metering/chargeback - resource usage is tracked and billed based on service arrangement
Among the many types of cloud computing services delivered internally or by third-party service providers, the most common are:
- Software as a Service (SaaS) – software runs on computers owned and managed by the SaaS provider, versus installed and managed on user computers. The software is accessed over the public Internet and generally offered on a monthly or yearly subscription. 
- Infrastructure as a Service (IaaS) – compute, storage, networking, and other elements (security, tools) are provided by the IaaS provider via public Internet, VPN, or dedicated network connection. Users own and manage operating systems, applications, and information running on the infrastructure and pay by usage.
- Platform as a Service (PaaS) – all software and hardware required to build and operate cloud-based applications are provided by the PaaS provider via public Internet, VPN, or dedicated network connection. Users pay by use of the platform and control how applications are utilized throughout their lifecycle.
Benefits of cloud computing services
Cloud computing services offer numerous benefits, including:
- Faster implementation and time to value
- Anywhere access to applications and content
- Rapid scalability to meet demand
- Higher utilization of infrastructure investments
- Lower infrastructure, energy, and facility costs
- Greater IT staff productivity across the organization
- Enhanced security and protection of information assets
<urn:uuid:46982ca8-4ad8-4c92-80b1-7b86b5198a9c>
CC-MAIN-2017-04
https://www.emc.com/corporate/glossary/cloud-computing-services.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00315-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919558
546
2.90625
3
Massoud Amin is director of the University of Minnesota's Technological Leadership Institute, an organization dedicated to forming connections between engineering, science, business and technology. Amin, a professor of electrical and computer engineering, recently was listed as a mover and a shaker in the smart grid industry by GreenTech Media. He is also chairman of the IEEE Smart Grid Newsletter. In an abridged email interview, Amin discusses the smart grid with Government Technology. Scenarios for a smart grid vary wildly. But a common understanding [is] that in the coming years, electricity will play a much greater role in global society. It is entirely possible that nations, regions and cities that best implement new strategies and infrastructure could reshuffle the world pecking order. It's very possible that emerging markets could leapfrog other nations in smart grid markets and deployment.
Massoud is reading Make It in America: The Case for Re-Inventing the Economy, by Andrew Liveris, chairman and CEO of the Dow Chemical Company.
How can we have one?
Smart grids have the potential to substantially reduce energy consumption and CO2 emissions. In fact, CO2 emissions alone could be reduced by 58 percent in 2030, compared to 2005 emissions. Microgrids that localities build to serve campuses, communities and cities will contribute to smart grid sustainability benefits. Microgrids are wonderful examples of the "think globally, act locally" principle. They draw their energy from locally available, preferably renewable resources. They use smart grid technologies to continually monitor customer demand, and they offer innovative pricing and other programs to manage the load and encourage customers to conserve energy. The microgrid ships any excess capacity back into the grid. I've mentioned the overloaded grid conditions we have today. Yet the situation is certain to get much worse, especially with the increasingly digital society. 
Twitter alone puts a demand of 2,500 megawatt hours per week on the grid that didn’t exist before. Because of increasing demand, experts believe that the world’s electricity supply will need to triple by 2050. It needs a self-healing infrastructure to ensure that [the] power grid can continue to operate reliably for businesses and consumers who depend on it. A smart grid that is overlaid with the various sensors, communications, automation and control features that allow it to deal with unforeseen events and minimize their impacts will be resilient and secure. Not only can a self-healing grid avoid or minimize blackouts and associated costs, it can minimize the impacts of deliberate attempts by terrorists or others to sabotage the power grid.
7.13 What are LEAFs?

A LEAF, or Law Enforcement Access Field, is a small piece of "extra" cryptographic information that is sent or stored with an encrypted communication to ensure that appropriate government entities, or other authorized parties, can obtain the plaintext of the communication. For a typical escrowed communication system, a LEAF might be constructed by taking the decryption key for the communication, splitting it into several shares, encrypting each share with a different key escrow agent's public key, and concatenating the encrypted shares together. The term "LEAF" originated with the Clipper Chip (see Question 6.2.4 for more information).
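The key-splitting step described above can be sketched with a simple n-of-n XOR secret-sharing scheme. This is an illustration only: the function names are our own, and the step where each share is encrypted under a different escrow agent's public key is omitted (only noted in a comment).

```python
import os

def split_key(key: bytes, n: int) -> list[bytes]:
    """n-of-n XOR secret sharing: all n shares are needed to rebuild the key."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        # XOR the key with each random share; the final share makes the XOR of
        # all shares equal the original key.
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def recover_key(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the original key."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

session_key = os.urandom(16)
shares = split_key(session_key, 3)   # one share per escrow agent
# In a real escrowed system, each share would now be encrypted under a
# different escrow agent's public key and the results concatenated to form
# the LEAF that travels with the ciphertext.
assert recover_key(shares) == session_key
```

Because the scheme is n-of-n, no single escrow agent (or any subset of them) learns anything about the session key on its own.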
Predicting earthquake risks and effects around the world

International, open-source modeling effort looks to develop a global view

- By Henry Kenyon
- Nov 03, 2010

An international consortium is developing an open-source earthquake model that will help planners map high-risk zones and take preventive measures. When it is complete, the software program will tap into a variety of databases, permitting users to create models and maps ranging from national or regional assessments of vulnerable areas to street-level studies of individual buildings near fault lines. Such a tool could help at-risk cities and regions institute repairs and construction before succumbing to a Haiti-type disaster.

The Global Earthquake Model is a public/private partnership supported by the United Nations, the World Bank, the Organization for Economic Co-operation and Development, individual nations and private firms. GEM grew out of efforts to provide something more dynamic than a mere map of earthquake risk zones, said Josh McKenty, the project's IT manager.

Headquartered in Pavia, Italy, the GEM consortium began work in 2009. When it is complete, GEM will provide a scalable software tool that will allow users to calculate earthquake risks worldwide, provide a basis for comparing earthquake risks across regions and borders, and permit the estimation of socio-economic impact and cost-benefit analysis of mitigating actions. It is designed to communicate earthquake risks accurately and transparently so that organizations, institutions and individuals can make decisions about risk mitigation, and it will be available to a wide variety of users, from communities to nations.

International groups are using the open-source architecture to create the various tools that will make up the software package, McKenty said.
The consortium has open calls for proposals out for 10 projects focused on various aspects of GEM. Much of this work is being conducted at a supercomputer cluster linked to GEM's modeling facility in Zurich.

McKenty said that GEM is not necessarily trying to raise the state of the art, because earthquake modeling technology is already very good in some regions of the world, but rather to extend the state of the art to cover places where there is little or no earthquake modeling or where such techniques are not very sophisticated. GEM is an open-source and open-data project seeking to create a tool that can access regional data down to the individual household level. When the site is complete, McKenty said, anyone interested in building or remodeling a house will be able to visit the GEM website and call up data about their regional earthquake requirements.

Risk modeling is an essential feature of GEM. McKenty said that international groups such as the World Bank need decision-support tools for risk analysis. He noted that risk analysis has become highly refined only in the last several years; GEM seeks to move risk modeling to a higher level. "When you do risk analysis instead of basic hazard analysis, what you end up with is a platform to make intelligent investment decisions," he said.

This capability will allow regional and municipal governments to conduct their own analyses, such as determining how many schools in a region require upgrades to meet seismic standards, and which most need work within existing budget levels. "These kinds of decision-support tools aren't available in most places in the world, and even where they are available, they're still a manual process. So we're looking at addressing socio-economic impact as part of the modeling language," he said.

McKenty, who is currently on sabbatical from his job as the chief architect of cloud computing for NASA's Nebula program, said there are 10 regional GEM programs.
He said that most of these regional efforts will use their own hardware. Although they can access the modeling facility in Zurich, for various reasons, such as latency, these groups will want to run their own systems, he said. He added that some regions will want to run their own systems because they may not want to share their data.

One of the challenges is to federate these various systems together so that, in cases where certain regions do not allow their databases to be fully replicated, they will still permit calculations and queries to be performed against them. As an example, he cited the border region between India and Pakistan, which is crossed by fault lines. India and Pakistan each have half of the data for these faults, but they do not share it with each other. "Nobody has a complete picture of that region of the world for seismic activity," he said.

McKenty said it is unlikely that either nation will let its seismic data be shared through GEM. But it may be possible to persuade each nation to run its own data infrastructure so that GEM can run a federated model taking advantage of the data sets on each side of the border without sharing the underlying data between the two sides. This would allow computer-based earthquake risk products that are more accurate than those available today, he said.

Individual modelers can take the open GEM system and run it on their own desktop or laptop computers. These individuals can either connect online to the open GEM architecture or run it against their own data sets and use it to refine their own models.

Work on GEM began 18 months ago. The first public code release is due Jan. 1, 2011. By early next year, McKenty hopes that GEM will begin offering services in its modeling facilities. In March, he hopes to provide socio-economic impact tools. He is optimistic about the program's goals because of its ability to pull in contributors from other projects and include their work in the platform.
Observing that GEM is the most complex piece of scientific software he has worked on, McKenty said that there is probably no comparable program trying to build a platform at this scale. The closest model is Google Maps, but he said that comparison falls short because Google only has a geospatial imaging component, whereas GEM collects a variety of other data and runs the information through complex levels of computation to produce layered, area-specific maps. "It's a tremendous piece of software to build," he said.
Definition: A list of vertices of a graph where each vertex has an edge from it to the next vertex.

Specialization (... is a kind of me.): simple path, shortest path, cycle, Hamiltonian cycle, Euler cycle, alternating path.

See also: all pairs shortest path, all simple paths.

Note: A path is usually assumed to be a simple path, unless otherwise defined.

Entry modified 29 July 2004.

Cite this as: Paul E. Black, "path", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 29 July 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/path.html
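The definition translates directly into a check over an adjacency-set representation of a graph (the representation and function name below are our own illustration, not part of the dictionary entry):

```python
def is_path(adjacency, vertices):
    """True if each consecutive pair of vertices is joined by an edge."""
    return all(v in adjacency[u] for u, v in zip(vertices, vertices[1:]))

# A small directed graph as a dict of adjacency sets.
graph = {"a": {"b"}, "b": {"c"}, "c": {"a"}}

assert is_path(graph, ["a", "b", "c"])
assert is_path(graph, ["a", "b", "c", "a"])  # repeats "a": a path, but not a simple path
assert not is_path(graph, ["a", "c"])        # no edge from "a" to "c"
```

The second assertion shows why the note matters: the vertex list ["a", "b", "c", "a"] satisfies the definition of a path but is not a simple path, since "a" appears twice.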
Excerpted from the "MPLS: What is it? What can it do for me?" white paper. Download the complete paper from our Knowledge Center.

Traditional routers forward packets by examining the network-layer header (typically the destination IP address), searching for the best matching entry in the routing table, and forwarding the packet through the specified interface to the next-hop router. This process is time-consuming and is repeated for each packet at each router along the path. Because no state is maintained from packet to packet, the system is highly scalable, but inefficient.

Multiprotocol Label Switching (MPLS) began life as a way for routers to short-cut the process of treating each packet independently. In an MPLS network, the ingress router does a standard lookup and assigns a numeric label to the packet. The MPLS label is assigned by the ingress edge router based on a Forwarding Equivalence Class (FEC), which represents a series of packets to be forwarded in the same manner, over the same path, to the same destination. In basic IP routing, for example, a label is assigned to each target IP network that the MPLS domain knows about. An FEC could also be associated with specific classes of service for QoS processing, or with a VPN.

Core routers then examine the label and forward the packet according to the label. All packets with the same label are forwarded the same way. This relieves the core routers of much processing, making the overall network more efficient.

The beauty of MPLS is that the label itself has no meaning other than what the software gives it. Because the labels are just numbers, they can be assigned according to any criteria the router software supports. This feature gives MPLS an extraordinary ability to support many networking applications. The label can be used to implement any forwarding treatment that comes to mind.
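The contrast between the two lookups can be sketched as follows. All prefixes, interface names, and label values here are invented for illustration; the point is that IP forwarding requires a longest-prefix match over the whole table, while label forwarding is a single exact-match lookup.

```python
import ipaddress

# Conventional IP forwarding: longest-prefix match against the routing table.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "if0",
    ipaddress.ip_network("10.1.0.0/16"): "if1",
}

def ip_lookup(dst: str) -> str:
    """Return the outgoing interface for the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    return routing_table[max(matches, key=lambda net: net.prefixlen)]

# MPLS forwarding: one exact-match lookup on the incoming label.
lfib = {17: ("if1", 42), 18: ("if0", 99)}  # in-label -> (out-interface, out-label)

def label_lookup(label: int) -> tuple:
    return lfib[label]

assert ip_lookup("10.1.2.3") == "if1"       # /16 beats /8
assert label_lookup(17) == ("if1", 42)      # label swapped 17 -> 42, sent out if1
```

Returning a new outgoing label from the table mirrors the "label swap" a core label-switching router performs at each hop.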
By allowing multiple labels to be stacked within a packet, MPLS permits multiple applications, such as QoS and VPNs, to be combined. Once a packet has entered the MPLS domain, the routers use a simple and fast label lookup process to forward the packet to its next hop. In the Cisco implementation, MPLS leverages the Cisco Express Forwarding (CEF) feature to optimize label lookup and forwarding.

For MPLS to operate, labels must be generated, stored, and distributed. The Label Distribution Protocol (LDP) handles the generation and distribution of labels for basic IP forwarding and other applications, and the Label Information Base (LIB) stores the labels generated locally and received from LDP neighbors. LDP is generally enabled on each MPLS interface.

Other labels, as required by various applications, are generated and distributed through other protocols. For example, Multi-Protocol Border Gateway Protocol (MP-BGP) assigns and distributes labels used for VPN forwarding, and the Resource Reservation Protocol (RSVP) does the same for traffic engineering.
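Label stacking can be illustrated with a toy model (label values are arbitrary, and a real router operates on 32-bit label stack entries in the packet header rather than Python lists):

```python
# A packet's MPLS label stack, outermost label first.
packet = {"labels": [], "payload": "IP packet"}

def push(pkt, label):
    """Push a label onto the top (outside) of the stack."""
    pkt["labels"].insert(0, label)

def pop(pkt):
    """Pop and return the top label."""
    return pkt["labels"].pop(0)

push(packet, 5001)   # inner label, e.g. identifying a VPN
push(packet, 203)    # outer label, e.g. a traffic-engineered transport path

assert packet["labels"] == [203, 5001]
# Core routers act only on the top label; the inner VPN label travels
# untouched until the far edge of the domain pops down to it.
assert pop(packet) == 203
assert packet["labels"] == [5001]
```

This is the mechanism that lets a single packet carry both a VPN identity and a traffic-engineering path at once.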
Spotting a fancy new Tesla on the road might seem novel, but electric cars are nothing new. And, of course, hybrids like the Prius dot the highways. But the emergence of electric cars dates back further than you might think: the first ones go back as far as 1880, and they were common into the early 20th century.

Thomas Parker, the man behind making the London Underground electric, was the first to create an electric car suitable for production in 1884, using rechargeable batteries. By 1900, only 22 percent of cars were powered by gasoline, while 40 percent were electric and the remaining 38 percent ran on steam. Eventually, improvements in internal combustion engines and the invention of the electric starter made gasoline-powered cars a better -- and cheaper -- option. The growth of gasoline-powered cars from companies like Ford and General Motors helped lower their prices to almost half the price of their electric counterparts. By the 1930s, gasoline-powered cars had taken over the market, and electric cars disappeared from the marketplace.

Fast forward to the 1950s. Growing concerns about pollution from gasoline-powered cars prompted the Air Pollution Control Act in the U.S. This garnered some interest in electric cars, and by the 80s and early 90s there was increasing pressure and demand for fuel-efficient vehicles, with the dream of a zero-emission car at some point in the future.

What makes electric cars different from gasoline cars?

The most obvious difference between electric and gasoline cars is that the former uses an electric engine while the latter uses a gasoline engine. Electric cars might look like a typical car on the outside -- or not, depending on the brand you choose -- but under the hood they swap out some important parts from their gasoline counterparts. The most notable aspect of an electric car is its battery.
Combine the battery with an electric motor and the motor's controller, and you've got the basic brains of an electric car. The controller gets its power from the car's battery and sends it along to the motor to get you on your way.

Tesla or Edison?

There are two different types of motors that can power an electric car: AC or DC. These stand for alternating current, pioneered by Nikola Tesla, and direct current, championed by Thomas Edison, and the two come with a long, competitive history. The main difference between the two motors boils down to the voltage they need to get going: a DC motor will use 92 to 192 volts, while an AC motor runs on 240 volts with a 300-volt battery pack. DC motors are also cheaper, but they suffer from limitations, such as inadequate acceleration and overheating -- sometimes to the point of self-destruction -- when in overdrive. AC motors are easier to implement in cars, and they support regenerative braking, which can deliver power back to the battery when you hit the brakes.

Why aren't we all in electric cars yet?

A lot of the limitations that put electric cars out of favor in the early 1900s still exist today: batteries are too heavy, they take too long to charge, they're too expensive, and you can't go very far without stopping to find a place to charge. If you prefer the conspiracy route, we already have the technology to build affordable and efficient electric cars, but the car companies want us to remain dependent on gasoline. That conspiracy theory goes back as far as the L.A. electric subway system, which was allegedly squashed by General Motors in the early 1900s as an attempt to get more people into their vehicles. Whether you want to believe it or not, at the very least it makes for an interesting alternative background to the history of electric vehicles.

The biggest challenge the electric car industry is up against is battery power.
According to How Stuff Works, there are six major flaws in lead-acid batteries, the batteries traditionally found in electric cars: weight, bulk, capacity, charge time, lifespan and price. Heavy batteries mean heavier cars, which lowers efficiency and performance, making their high cost and short life span unattractive to car buyers.

You will find that plenty of vehicles use a nickel-metal hydride (NiMH) battery, which lives longer than a lead-acid battery but has a less efficient charging and discharging process. Another battery option is the Zebra, or sodium, battery; it uses molten sodium chloroaluminate (NaAlCl4). Zebra batteries are nontoxic and can withstand a few thousand charge cycles, but they aren't great in terms of power density or storing power for long-term use.

Finally, some electric cars run on lithium-ion batteries, which you may be familiar with because they power most consumer electronics. Now they're popping up in electric cars, but like the other batteries listed, they have limitations, including a short life cycle, some toxicity and a tendency to degrade significantly over time.
Security Simplified: The Base+Suffix Method for Memorable Strong Passwords

It's the classic problem of having "too many keys". You have accounts on many different web sites. Some are small and relatively insignificant, from a security point of view, like blogs or shopping sites. Some are large and sensitive, like banking and PayPal accounts. Since unified login mechanisms like OpenID are not yet pervasive, you must remember the usernames and passwords for every single site. This is a truly daunting task.

Ideally, you would like to use passwords that are "strong" (i.e., very good, not easily guessable) and different for every site. However, how can you remember each secure and unique password without resorting to a "cheat sheet"?

What is a "strong" password?

A "strong" password is one that cannot be guessed either by automated means or by someone who knows you and knows all kinds of things about you. To understand what makes a password "unguessable", let's review the various ways that a malicious person could attempt to "guess" or "crack" your password. Typically, they will, among other things:

- Check the list of the most commonly used passwords. These include things like no password, a space for a password, and the words "password", "admin", "passcode", "secret", and similar ones.
- Check combinations of letters that are next to each other on your keyboard, like "qwerty", "asdfgh", "34567", etc.
- Check if you are using your username, or some part of it, as your password.
- Check common swear words.
- If they know you, check for names, birthdays, anniversaries, phone numbers, maiden names, pets' names, etc., of yourself or anybody in your immediate family.
- Perform a dictionary attack by trying every word in a dictionary. Then try the same words with varied capitalization and with simple numbers appended or prefixed to them, like "apple1" and "Apple". Finally, try the words with common numbers substituted for similar-looking letters. I.e.
"3" for "e", "1" for "l", "0" for "o", etc. This will result in trying things like "app1e", "passw0rd", etc.
- Perform another dictionary attack using all common first names and last names, possibly with varied capitalization of the first letters.

You may be surprised to know that, depending upon the situation and the security of the system holding the password, a malicious person may be able to try millions of guesses in a very short amount of time. So, in many situations, trying these seemingly endless possibilities is really possible. You may remember the case of Barack Obama's Twitter account being compromised last month; this was due to a hacker running a program that performed similar password-guessing tests on one of the Twitter administrators' accounts ... and discovering her ill-chosen password.

So, a strong password is one that cannot be guessed by an automated program using any of these approaches and which also cannot be guessed by someone who knows you well and tries passwords based upon information related to you. This makes it hard to choose a password that you can remember, and even harder to choose many different ones for many different web sites.

Why use different passwords for different web sites?

Simply put, if you have a different password for every web site where you have an account, then if one of these accounts is compromised or stolen, that information cannot be used to log in to any of your other accounts. I.e., using different passwords limits the possible collateral damage caused by someone getting your password.

Using separate passwords is actually extremely important, because the possibility of one of your passwords being compromised, or at least known by other people, is very, very high. Why? Many (even most) web sites keep a copy of your password to their site unencrypted and in plain text in their databases.
They do this either to facilitate verifying your password when you log in (a poor way to do this, but common), or so that they can give you your password if you have lost it (instead of forcing you to reset it), or so that they can use that password for various things within their systems, like performing automated tasks for you. As a result, your password is visible to their system operations staff, and possibly even their support staff. It is also visible to anyone else with access to their databases, such as a hacker who might break into their systems.

So, you should assume that the people who work for each web site know your username and password to that web site. If they can guess what other web sites you might be logging into, they could maliciously try that password and similar usernames or email addresses to gain access to those accounts as you.

Not all web sites use login processes secured via SSL

If you log in to a web site and that login process is not secured via SSL, then your username and password are sent "in the clear" over the Internet to the web site. This is like writing your username and password on a postcard and sending it in the mail ... anyone who can see the message being sent can read your sensitive information. This is especially dangerous if you are connecting from a wireless hotspot or other location where you do not trust everyone who may be using the local network.

You can tell whether you logged in using a secure process: once you are logged in, the URL in the browser starts with "https://" and there may be a little "lock" icon in your browser that indicates a secure connection. If you log in to a web site without SSL security, you should assume that someone could get your username and password and log in there as you, and that they could try to use that information to log in to other sites as you too.

What common mistakes are made in managing passwords to many sites?
Some mistakes are now obvious:

- Using the same password for many different sites
- Using a password that is easily guessable

Some mistakes are less obvious:

- Not changing your passwords for a long time. The longer the passwords stay the same, the greater the chance of a compromise.
- Writing your passwords down on post-it notes or other pieces of paper. Anyone who can see that paper (on your desk, in your wallet, in your drawer, etc.) then has your personal password list!
- Saving the passwords in a non-encrypted file on your computer. Anyone who can access your computer (or steal your laptop) can access that file and get your passwords. Even if you use a password-protected file, you must be sure that (a) you use a strong password for that file, and (b) the password protection in use is actually good; e.g., old versions of Microsoft Office have useless password protection. If the password is weak or the encryption poor, then a malicious person could easily open the "secure" file.
- When you do change your password, changing it to a password that you previously used. Never do that, as someone may know your old passwords.
- When you do change your password, changing it to a password that is very similar. Don't do that either, as someone may try all common variations on your previous password to guess the new one. I.e., changing your password from "Joe2008!" to "Joe2009!" is not a very good change.

Back to basics -- what are our goals?

When trying to juggle logins for a plethora of web sites, we need to:

- Make sure we have a different password for every site.
- Make sure that the passwords for all of the sites are "strong".
- Make sure that we can easily remember all of these passwords.
- Avoid writing all of these passwords down in an insecure manner.
- Make it easy to remember your passwords after changing them all.

Making strong passwords that are easily remembered

This is, on the surface, perhaps the hardest thing to do. Typically, when someone gives suggestions on how to make a "strong" password, you will hear things like:

- Use a combination of letters, numbers, and symbols, like "ksjhd7623!#%"
- Use both upper and lower case letters.
- Make the password as long as possible.
- Do not use words from the dictionary or personal information in the password.
- Use a long sequence of random characters.

All of these tips are valid and play an important role in making good, strong passwords. However, taken naively, you will end up with very strong passwords that are impossible to remember, like "slkJfH867234i@#$%#%608j". You would never guess that one! However, you will never remember it either. You'll have to write it down somewhere or save it in a file, and you will have to look it up every time you need it. Having to look up our passwords all the time makes them too cumbersome, unless the passwords are rarely used.

The two-part system for making many strong, easily-remembered passwords

This is not a system that we invented. It has been around a while and we have no reference as to its origin. Anyway, here is what you do:

- Come up with ONE strong but short password that is not hard to remember, like "J33pers!" We'll call this your "BASE".
- Then, for every web site that needs a separate password, construct it by taking the BASE and appending a suffix that is specific to the web site in question. This suffix should be very, very easy to remember. It does not have to be "strong", but it is good if it is! We'll call this the "SUFFIX".
- The new password is "BASE" + "SUFFIX".

For example, we'll make a strong BASE by taking a short phrase that we can remember and doctoring it up in a way that we can remember, but which makes it strong:

- Pick some phrase like "i feel great". Multi-word phrases are good starting points for strong passwords, because they are memorable but not easily vulnerable to dictionary attacks.
- Add symbols -- "i feel great!"
- Add numbers by replacing some letters with numbers phonetically -- "i feel gr8!"
- Use both upper and lower case letters -- "I Feel Gr8!"

This BASE, "I Feel Gr8!", is relatively short but strong. It uses letters, numbers, and symbols. It uses upper and lower case letters. It is not derived from a word in the dictionary or from personal information. You can use this site at Microsoft to check the strength of your password.

This BASE by itself is a good password, but since we don't want to use the same password everywhere, we need to generate custom passwords for each of our web site accounts by appending a suffix onto this base. Note, adding more "stuff" onto a password that is already strong only makes it stronger.

When making up suffixes, remember to choose suffixes that:

- Cannot be guessed based on the name of the site you are going to. I.e., don't use the suffix "amazon" or "amazon.com" for your login to amazon.com!
- Cannot be guessed using a dictionary attack.

So, let's do some examples to see how it works:

- For our example login to Amazon.com, we might use the suffix "kindle the fire" (a reference to Amazon's Kindle ebook reader) to get the password "I Feel Gr8!kindle the fire"
- For our example login to our bank, we might use the suffix "i need money!" to get "I Feel Gr8!i need money!"
- For our example login to our blog, perhaps we use the suffix "no comment!" to get "I Feel Gr8!no comment!"

So you see:

- It is OK to use spaces in your passwords.
- Using phrases with punctuation creates suffixes that are easy to remember and secure against dictionary attacks.
- The resulting combined passwords are easy to remember, very strong, and specific to each site.

Remembering your password suffixes

It is likely that no one will remember all of their suffixes immediately, and you will want to protect against forgetting them years later.
You can write down a list of the suffixes or, better yet, make an encrypted file in which you keep a list of the suffixes and the sites (and usernames) they go with. Do not include the BASE in this file. This makes for a cheat sheet that is easy to use and much more secure than your average password list. Without knowing the BASE, no one who looks at the cheat sheet can actually use any of the passwords listed. And, since the BASE itself is a strong password, it will not be easily discovered. If you want to save your BASE somewhere for safe keeping, be sure to put it somewhere distinct from your suffix cheat sheet (like on paper in your safety deposit box or vault).

Changing your passwords

Using the BASE+SUFFIX scheme, when you need to change your passwords (as you should do regularly), all you have to do is change the BASE everywhere. You can leave all the suffixes the same. In this way, you get all new strong passwords that are all easy to remember, but only one thing has changed!

How does this BASE+SUFFIX method accomplish our goals?

1. Make sure we have a different password for every site

The use of the SUFFIX ensures that all sites have different passwords. Using suffixes that are moderately strong and not obvious (to anyone but you) based on the site you are logging in to means that even if someone has the password to one of the sites -- which, as we mentioned above, is very likely -- and they know that you are using the BASE+SUFFIX method, it will still not be feasible for them to guess your passwords to other sites. And the better your choice of suffix, the greater the security.

2. Make sure that the passwords for all of the sites are "strong"

As the BASE part is strong, the BASE+SUFFIX is even stronger. So, all of the passwords are distinct and strong.

3. Make sure that we can easily remember all of these passwords

You have one somewhat complex thing to remember, the BASE, but this is created from a phrase that you know.
The suffixes are all made up of phrases that should be memorable and related to each site, so the combination is easy to remember, especially after you use it a few times.

4. Avoid writing all of these passwords down in an insecure manner

While you can write down or save the SUFFIX list independently of the BASE, your backup copy of your passwords and sites should be very, very secure. Maybe you don’t need to write them down at all, if you have a good memory.

5. Make it easy to remember your passwords after changing them all

Since you can change all of your passwords by just making a new BASE, it will be easy to remember all of the new ones, as you will already know the suffixes.

So, the BASE+SUFFIX method meets all of our goals; however, it relies on you to:

- Choose a good strong BASE.
- Make each suffix not guessable based on the web site in question, i.e., strong in and of itself.

But, with a little thought, this is not very hard. Actually, it can be kind of fun.

Help making a strong BASE password

Of course, making up your passwords all by yourself is the most secure thing you can do. However, there are some web sites out there that can get you started:

What a strong password doesn’t protect you from

Just because your passwords are now all super strong and separate for every site, that doesn’t mean that your accounts are all safe! You must also be aware:

- If copies of your passwords are stored on your computer or at your home or office, can anyone else ever gain access to them, and thus your accounts? Think: snoopy people, theft, lock pickers, people looking in your trash, people “fixing” your computer, etc.
- Is your computer compromised? If there is a “key logger” installed on your computer, either intentionally by someone else or by the act of a virus, then everything you type is being logged, saved, and viewed by someone. So, no matter what kind of security you may be using, when you type in that password, it is being saved and sent to someone!
Be sure that your computer is secured, virus free, phishing software free, and that only you have administrative access. If anyone else has administrative access to your computer, then you never know what software might be running and watching what you do and type.

- Be sure to use secured connections (SSL) when connecting to your accounts on web sites. If you do not do this, then anyone eavesdropping on your Internet traffic can see your username and password. If you are in a public wireless hot spot, use a VPN or SSL for everything, as eavesdropping is extraordinarily prevalent in such locations.
- If you tell someone your password, assume that they have written it down, told other people, used it in a computer with a key logger, or had it discovered through eavesdropping. You never know.
- If you are typing your password in and someone is watching you, they may be able to discern what you typed!

Centralized online storage for copies of your passwords

LuxSci recommends using a secure online repository for storing copies of all of your usernames and passwords.

- The copies are not stored on your computer or on paper anywhere (except maybe in a vault), and this vastly increases your security.
- You can access the copies from anywhere, over a secure channel, so you can look up a password if you have forgotten it, no matter where you are located.
- You can securely store other information along with these passwords for reference, such as:
  - User names needed
  - Web site links
  - File attachments, like contracts, how-to documentation, reference guides, etc.
- If you work with others and have a shared collection of passwords that are used to access various sites, you can share these online with everyone and even specify who has permission to access which passwords.
- You should still keep a copy of all of your passwords in a vault somewhere, “just in case”.
LuxSci’s Passwords WebAide is a secure online password (and related information) storage system that does all of this, including facilitating sharing between users. It encrypts all password data using your own PGP certificates (which you can upload or which we can generate for you), so all of the data is encrypted while stored and is only decryptable by you when you log in and supply your certificate password to open the secure password entries.
The Republican National Convention is coming up July 18-21, the Democratic National Convention will follow on July 25-28, and the political climate is tense. In particular, politicians and commentators have expressed serious concern about holding the events soon after shootings in Louisiana, Minnesota and Texas, all of which have been met with public outrage. But while these concerns and others surround both candidates, cyber crime also looms over the upcoming 2016 presidential election, presenting a real threat to the outcome of the race. Because we’re in the business of protecting sensitive data and tech resources, we want to explore this issue and raise awareness of the security vulnerabilities that could negatively impact the presidential race. How Hackers Threaten the Election Ethical hacking expert Michael Gregg, writing for Huffington Post, outlines a number of security challenges that call into question the security of the upcoming election. Gregg, referencing a 2015 study by Verizon, points out that the public sector suffers from a higher incidence of crimeware infections than any industry sector. And more specifically relating to election technologies, a 2012 security review by Argonne National Laboratories found that election machines were alarmingly easy to hack. The state of Virginia also placed a ban on touchscreen voting machines in 2015, as the security of these devices has been called into question. By hacking into voting machines or the networks on which they transmit data, cyber criminals could use a number of methods to rig an election. As Gregg’s article points out, the most obvious method would be to alter records on the voting machine itself. However, hackers might also shut down the machines within certain precincts or tamper with records sitting in the government databases. Even beyond election rigging, hackers have a number of options for sabotaging presidential candidates. 
Through DDoS attacks, they might shut down access to a candidate’s website, or they could vandalize the site by hacking into its servers. As the recent scandal involving Hillary Clinton’s private email server suggests, hackers could also access a candidate’s sensitive information. One potential use of this information would be to “dox” the candidate, exposing personal information that could damage their political standing.

What Can You Do?

Vulnerabilities in the election system have the alarming potential to impact both politicians and the general public’s vote, highlighting the dire need for active cyber security initiatives wherever sensitive data is managed or stored. Even without sizable budgets or sophisticated internal resources, agencies can find solutions for proactive monitoring, reporting, intelligence and response by partnering with a capable managed security partner. Lunarline provides 24/7 managed security to clients in both the private and public sectors to support a complete risk management approach and minimize the threat that cyber criminals pose to your data.
You don't need an electrical engineering degree to build a robot army. With the $35 Raspberry Pi B+, you can create robots and connected devices on the cheap, with little more than an Internet connection and a bunch of spare time. The Raspberry Pi is a computer about the size of a credit card. The darling of the do-it-yourself electronics crowd, the Pi was originally designed to teach kids computer and programming skills without the need for expensive computer labs. People have used Raspberry Pis for everything from robots to cheap home media centers.

The Pi sports USB ports, HDMI video, and a host of other peripherals. The latest version, the B+, sports 512MB of RAM and uses a MicroSD card instead of a full-size card. Most people install a Linux distribution called Raspbian onto the SD cards needed to boot the Pi. Raspbian is a version of Debian Linux (the distribution Ubuntu is based on) designed specifically for use on the Pi. Raspbian is also recommended for new Pi users to familiarize themselves with the device and the Linux operating system. If the big "L-word" scares you, rest easy knowing that Raspbian ships with a familiar graphical environment, complete with a web browser. And you can get your Pi up and running in less time than it takes to bake an edible raspberry pie. Ready? Let's get cooking.

Raspbian Raspberry Pi

Yield: One web-ready 2.2-inch x 3.4-inch Raspberry Pi.
Processing time: about 20 minutes.
Prep time: about 20 minutes.

Before you start, gather everything you need in one place, preferably near your router.
- 1 Raspberry Pi B+, bare
- 1 USB mouse
- 1 USB keyboard
- 1 ethernet cable
- 1 monitor with HDMI (preferred) or DVI input
- 1 HDMI to DVI adaptor (optional)
- 1 USB cable with micro-USB connector (you can borrow this from an Android phone) and wall adapter
- 1 8GB MicroSD card with standard SD adapter
- Windows PC with SD card reader and Internet connection

Win32 Disk Imager should only take about 30 seconds to download on a fast connection, while Raspbian will take about 12 minutes. While the files are downloading, combine the mouse, keyboard, HDMI cable and ethernet cable with the Raspberry Pi. Connect the other end of the ethernet cable to your home router and the other end of the HDMI cable to your monitor. If you chose to use a monitor with DVI only, use the HDMI to DVI adaptor. Combine the micro-USB-tipped USB cable and wall adapter. Combine the MicroSD card with the SD card adapter. Set aside.

Once Win32 Disk Imager is finished downloading, install the software. When Raspbian is done downloading, extract the IMG file to a handy location. Insert the SD card adapter into the PC's SD card reader and start Win32 Disk Imager as an administrator. Click the folder icon to browse for the Raspbian IMG file, click the drop-down menu under Device and select the appropriate drive letter for the SD card. Click Write, and let the program run for about 7½ to 8 minutes. When Win32 Disk Imager is finished writing, click OK and Exit. Remove the SD card from your PC and pull out the MicroSD card from the SD card adapter.

Insert the MicroSD card into the Raspberry Pi until it clicks securely. Plug the USB power cable into the wall and into the Raspberry Pi to boot the computer. When the Pi boots, select the first option to format the remaining memory of the MicroSD card for use as storage. Set your time zone and keyboard layout. Raspbian is set to use a U.K. language and keyboard layout, so be sure to set the keyboard and language to your local language.
For most people in the U.S., the standard U.S. keyboard layout will work. Once you've configured your options in the setup program, hit Tab and select Finish. On the next screens, select appropriate "compose" keys, which are used to create special characters. I used the right Ctrl and Alt keys as compose keys because I rarely use them. When the setup program finishes, log in to Raspbian with the user name pi and the password raspberry. Next, type startx to open the LXDE graphical desktop environment. Once the graphical environment starts, you're good to go.

While the Pi can handle web applications like Google Apps, don't expect desktop-like performance. Remember that the Pi is running desktop software on really cheap hardware meant for mobile phones. Raspbian comes preloaded with the Midori web browser. To install another browser like Chromium, you'll have to use a couple of commands with the console program apt. But first, you'll have to update the list of packages available to apt. Type or paste the following into a console window:

sudo apt-get update

Next, open up a terminal and type sudo apt-get install <package name> to install the appropriate software package:

sudo apt-get install chromium

You can use the apt command to install everything from LibreOffice to the Apache web server. A full list of Debian packages available for Raspbian is available online. To turn off your Pi, double-click Shutdown on the desktop. Once the Pi's screen has gone dark and is no longer showing text of any kind, simply unplug the Pi from its USB power supply.

Once you feel at home with Raspbian, you can try writing programs for the Raspberry Pi using Python, or try your hand at other distributions like Pidora (a Pi-friendly version of Red Hat's Fedora Linux) or the Raspberry Pi version of Arch Linux. Because the Pi is so cheap, don't be afraid to experiment and break things.
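As a first Python exercise on the Pi, a tiny standard-library script can report what hardware and OS it is running on. This is a sketch of our own, not part of any Raspberry Pi toolkit:

```python
import platform

def system_summary():
    """Return a small dict describing the machine this runs on.

    On a Raspberry Pi running Raspbian you would typically see a
    machine value like 'armv6l' or 'armv7l' and system 'Linux';
    on a desktop PC the values will differ.
    """
    return {
        "machine": platform.machine(),
        "system": platform.system(),
        "python": platform.python_version(),
    }

if __name__ == "__main__":
    for key, value in system_summary().items():
        print(key, "=", value)
```

Save it as a file, run it with `python`, and you have confirmed your toolchain works before moving on to bigger projects.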
If you ever get in a situation where your Raspbian installation is unusable, simply use Win32 Disk Imager to flash a fresh copy of the operating system onto your MicroSD card and start anew. Whether you have a big project in mind or just want to learn how to program in Python, the Pi is a great way to get a taste of what tiny computers can do. This story, "How to set up Raspberry Pi, the little computer you can cook into DIY tech projects" was originally published by PCWorld.
DOE launches collaborative platform for energy data

Open Energy Information site uses Linked Data approach

By Kathleen Hickey - Dec 16, 2009

The Energy Department is making its energy data widely available to the public via a Linked Open Data platform to enable broader access to data and encourage greater collaboration and transparency. Open Energy Information is based on the same software that runs Wikipedia, and allows users to not only access Energy's data, but also contribute information.

“This information platform will allow people across the globe to benefit from the Department of Energy’s clean energy data and technical resources,” said Energy Secretary Steven Chu. “The true potential of this tool will grow with the public’s participation – as they add new data and share their expertise – to ensure that all communities have access to the information they need to broadly deploy the clean energy resources of the future.”

Linked Data, part of the emerging, collaborative Semantic Web, is a method of exposing, sharing and connecting data via Uniform Resource Identifiers. Using a common framework, data can be shared across applications, enterprises and community boundaries using Resource Description Framework specifications. Thus, users can search for information across applications and in various locations to relate and query information in new ways.

Energy anticipates that the site will be used by government officials, the private sector, project developers and the international community to promote clean energy technologies nationally and globally. In the future, the agency intends to expand the portal to include online training and technical expert networks. The site, launched as part of a broader effort to improve the agency’s data transparency and collaborative efforts, follows guidelines set by the White House’s Open Government Initiative.
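The Linked Data model described above boils down to facts expressed as (subject, predicate, object) triples whose terms are URIs, so datasets from different publishers can be joined on shared identifiers. A minimal sketch, using only invented example URIs (not real OpenEI identifiers) and plain Python rather than an RDF library:

```python
# Facts as (subject, predicate, object) triples, each term a URI.
# All URIs here are illustrative placeholders.
triples = [
    ("http://example.org/plant/solar-1", "http://example.org/prop/type",
     "http://example.org/tech/solar"),
    ("http://example.org/plant/solar-1", "http://example.org/prop/locatedIn",
     "http://example.org/place/nevada"),
    ("http://example.org/plant/wind-7", "http://example.org/prop/type",
     "http://example.org/tech/wind"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Find every facility typed as solar:
solar = query(triples, predicate="http://example.org/prop/type",
              obj="http://example.org/tech/solar")
```

Because every term is a globally unique URI, a second publisher's triples about `plant/solar-1` merge cleanly into the same store, which is the point of the Linked Data approach.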
Energy worked with the National Renewable Energy Laboratory and other national laboratories to develop and populate the site, which includes more than 60 clean energy resources and data sets, including maps of worldwide solar and wind potential, information on climate zones, and best practices. OpenEI.org also links to the Virtual Information Bridge to Energy, a data analysis hub that will provide a dynamic portal for better understanding energy data. NREL will continue to develop, monitor and maintain both sites.

Simultaneously, Chu announced that the agency is contributing various tools and data sets for Data.gov's National Assets program. The information is available in RSS and Extensible Markup Language (XML) feeds. The publicized data is geared toward increasing access to information on publicly funded technologies that are available for license, opportunities for federal funding and partnerships, and potential private-sector partners.

Kathleen Hickey is a freelance writer for GCN.
We often hear about the pressure that traditional media is under today. Falling ad revenues, the rise of citizen journalism and reduced attention spans mean that deep investigative journalism is growing increasingly rare. So it is interesting to read a story where seemingly deep research has taken place. That was the case last year when two French journalists from Le Monde received access to a highly complex dataset. The two obtained data detailing over 100,000 clients and related bank accounts at the Swiss branch of HSBC. The data pointed to unethical and fraudulent practices. The problem was that the complexity and mass of the data meant that traditional investigative approaches simply wouldn't work. Traditionally, reporters have to try and spot relationships between data in Excel files, conduct manual Internet searches and sometimes physically draw out connections between people and entities to get the right facts for their stories. It would have taken years for the two to unravel the data. This is where the International Consortium of Investigative Journalists (ICIJ) came in. The ICIJ is a global network of more than 190 investigative journalists in more than 65 countries who work together on in-depth investigative stories. Founded in 1997 by American journalist Chuck Lewis, the ICIJ was launched as a project of the Center for Public Integrity to extend the center’s style of watchdog journalism, focusing on issues like cross-border crime, corruption and the accountability of power. Backed by the center and its computer-assisted reporting specialists, public records experts, fact-checkers and lawyers, ICIJ reporters and editors provide real-time resources and the latest tools and techniques to journalists around the world. The leaked data included information from account holders in over 200 countries and had a collective account total of over $100 billion. 
The challenge was to find a solution to analyze and visualize that data without the need for data scientists. This is where a graph database solution came in. “While working on stories like Offshore Leaks, I learned how important graph analysis is when investigating financial corruption,” said Mar Cabra, editor of the Data and Research Unit at the ICIJ. “Connections are key to understanding what the real story is: they show you who’s doing business with whom. We decided early on that we needed to use a graph-based approach for the HSBC Leaks.”

Cabra re-created the Excel files in a database and connected every name to one or several countries. Finally, the team turned the data into a graph format to explore the connections between nodes. The resulting graph database had more than 275,000 nodes with 400,000 relationships among them. A Web application was used as a user interface to visualize and provide access to the data for reporters. This enabled the journalists to identify the connections between people and bank accounts and, over time, find the connections and instances of fraud, corruption and tax evasion.

After Cabra’s team shared the tool on the ICIJ’s virtual newsroom, journalists worldwide tapped into the dataset and the graph analysis tool within their respective regions, querying data on a worldwide scale. By being able to easily visualize the networks around clients and accounts, they found many more connections than they had before, which led to new stories that later made front pages all around the globe. Prior to this, lone reporters had to establish connections by hand with the information of dozens of files, a time-consuming task that could yield inaccurate results.
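The advantage of a graph model over flat spreadsheets is easy to see in miniature. The sketch below, with invented names and relationships, stores the data as an adjacency list and uses a breadth-first search to surface the kind of indirect connection a reporter would otherwise trace by hand; it illustrates the approach, not the ICIJ's actual tooling.

```python
from collections import deque

# Invented example data: clients, accounts and companies as nodes,
# relationships as undirected edges.
edges = {
    "Client A":    ["Account 123", "Shell Co"],
    "Account 123": ["Client A"],
    "Shell Co":    ["Client A", "Client B"],
    "Client B":    ["Shell Co", "Account 456"],
    "Account 456": ["Client B"],
}

def connection_path(graph, start, goal):
    """Breadth-first search returning the shortest chain of nodes
    linking start to goal, or None if they are unconnected."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Client A and Account 456 never appear in the same spreadsheet row,
# but the graph reveals the chain between them:
print(connection_path(edges, "Client A", "Account 456"))
# ['Client A', 'Shell Co', 'Client B', 'Account 456']
```

A production graph database runs this kind of traversal declaratively over hundreds of thousands of nodes, which is why the approach scaled to the HSBC data where manual cross-referencing could not.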
The results of applying these technologies to the raw data speak for themselves. In February 2015, more than 50 news organizations worldwide (including Le Monde) revealed how HSBC had helped criminals, traffickers and tax evaders and profited from doing business with them, by helping shelter over 100,000 clients with accounts worth $100 billion in Switzerland.
Primer: Free Space Optics

By David F. Carr | Posted 2005-03-07

A wireless network that uses light beams instead of radio waves.

What is it? Free space optics technology is wireless networking that uses light beams instead of radio waves; it's laser-based optical networking without the fiber optic cable. In corporate networks, the most common application is wireless campus networking: typically, a rooftop-to-rooftop connection between buildings. Or the laser beam might be shot out one building's window into the window of a building across the street.

What are the advantages? Wavelengths for these transmissions do not require a Federal Communications Commission license. While this is true of some wireless networking schemes based on radio-frequency transmissions, free space optics is immune to the radio interference that can sometimes sabotage those systems. Jeff Orr, a senior product marketing manager at Proxim, says radio-frequency products have erased the bandwidth advantage that free space optics once enjoyed. He concedes that his high-end products cost about twice as much as a free space optics unit with the same capacity (about $75,000 for a gigabit radio link versus $40,000 for an FSO alternative).

What are the disadvantages? Transmissions fade rapidly in certain kinds of weather: fog, in particular. Isaac Kim, director of optical transport at MRV Communications, says fog droplets scatter the wavelengths of light used in free space optics. The power of the beam can't be increased because of concerns that the lasers might be dangerous to the eyes of anyone who happens to walk through a souped-up beam. Technical innovation has focused more on lowering costs than increasing the range of the optics equipment. To maximize availability, you can use a combination of free space optics and radio-frequency gear.

What's the real range?
A conservative estimate is up to 500 meters for a free space optics-only solution, or 1 to 2 kilometers (about a mile) for free space optics with a radio-frequency backup. You can stretch this in dry areas.

Any other downsides? Atmospheric effects such as scintillation (the "waves of heat" pattern you sometimes see over dark surfaces) can have an impact on free space optics transmissions. Urban rooftop-to-rooftop setups can encounter problems because tall buildings sway slightly in the wind, throwing off the aim of the tightly focused lasers.

Who are the vendors? The most recognizable name is Canon, better known for cameras and copiers. LightPointe is a specialist in this niche, where many startups have long since come and gone. MRV Communications offers free space optics gear as an adjunct to its fiber optic and Ethernet solutions.

Who will vouch for it? Fred Murphy, associate director of information technology for Jazz at Lincoln Center, says free space optics equipment proved ideal for connecting the center's new auditorium with its administrative offices. "We were so much within that range that it's ridiculous," he says. "We're literally across a New York City street." By shooting a laser out the window of one building into the window of another, Jazz at Lincoln Center established a network link that it owns, rather than paying the phone company for the bandwidth. Establishing a fiber optic link between the two buildings would have been prohibitively expensive because of the complication of digging up a New York street.

Paul Wolf, an engineering technology manager at CDI Business Solutions, was initially a reluctant customer of LightPointe's FSO technology when his company started using it to connect two buildings in Houston. He worried about how many complaints would be waiting for him the first time fog knocked out the link.
But the setup failed-over smoothly to a redundant RF link when the light beam was interrupted, and in two years the connection has had zero downtime, he says: "Now, I don't even think about it."
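The fog sensitivity discussed above comes down to simple link-budget arithmetic: atmospheric loss grows linearly (in dB) with distance. The sketch below is illustrative only; the 30 dB budget and the per-kilometer attenuation figures are assumed round numbers for demonstration, not vendor specifications, though clear-air FSO attenuation is commonly quoted at a fraction of a dB per kilometer while thick fog can run to 100 dB/km or more.

```python
def link_margin_db(link_budget_db, attenuation_db_per_km, distance_km):
    """Remaining signal margin after atmospheric loss, in dB.

    A negative result means the link is down. Geometric spreading
    and pointing losses are ignored in this simplified model.
    """
    return link_budget_db - attenuation_db_per_km * distance_km

# Assumed 30 dB budget over a 0.5 km hop:
clear = link_margin_db(30, 0.5, 0.5)   # clear air: ample margin
fog   = link_margin_db(30, 100, 0.5)   # thick fog: link fails
```

Run with these assumptions, the clear-air margin stays comfortably positive while the fog case goes deeply negative, which is why vendors pair FSO with an RF backup rather than simply turning up laser power.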
“How does obesity affect public health and how can we deal with its medical consequences”

Obesity, the state of being grossly overweight, is one of the major causes of degenerative diseases such as diabetes mellitus, osteoarthritis and various types of cancer, including breast cancer and bowel cancer. It is a major problem worldwide and affects approximately a quarter of the adult population in the UK, as well as one fifth of children aged 10 to 11 (NHS Choices, 2016). Increased obesity is a result of mass media advertisement of processed food, which significantly impacts the mindsets of both adults and children. The levels of obesity show no sign of slowing down, which is a definite result of unhealthy lifestyles, ranging from smoking to lack of daily exercise. Furthermore, advances in technology have resulted in more individuals remaining indoors, preventing the proper contraction of our muscles and the functioning of our metabolism needed to break down the food we consume and prevent excess storage of fat. Studies show a correlation between excess sitting and type 2 diabetes, obesity, and some forms of cancer. The prevalence of obesity has increased since 1993, rising from 14.9% to 25.6% as recorded in surveys compiled from the Health Survey for England. Results predict that by 2050, approximately 55% of adults and 25% of children will be obese (Public Health England, 2010).
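Obesity in adults is usually quantified with the body mass index (BMI), weight divided by height squared. The essay does not spell out the thresholds, so the cut-offs in this sketch are the standard WHO adult categories rather than figures from the text:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify(bmi_value):
    """Standard WHO adult categories (not stated in the essay itself)."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "healthy"
    if bmi_value < 30:
        return "overweight"
    return "obese"

print(classify(bmi(95, 1.70)))  # BMI ~32.9 -> obese
```

For example, an adult weighing 95 kg at 1.70 m tall has a BMI of about 32.9 and falls into the obese category.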
With the increased consumption of unhealthy products and the high prevalence of other major health conditions, obesity has a detrimental impact on the NHS, with the costs of treatments and prolonged check-ups predicted to reach between 10 and 12 billion by 2030 (Press Association, 2014). The poignant reality is that obesity has consumed the funding of health sectors in the past few decades. This has prevented further research into other major diseases, such as the development of anti-cancer drugs.

Looking specifically at obesity within children, who are exposed to cheap food high in sugars and fat, there is evidence within research showing a change in physiology on the basis of body weight regulation. The life expectancy of a child with obesity is significantly lower, and there is a higher risk of premature illness and death in adulthood. No treatment for this childhood obesity epidemic has yet been shown to be properly effective. Children have a slower rate of metabolism and are susceptible to more diseases than an adult would be under the same conditions (Ebbeling, Pawlak and Ludwig, 2002).

Taking nutrition into account, specifically the consumption of fruits and vegetables, evidence shows that consuming these alone will not cure obesity, as other factors have to be considered: obesity is caused by environmental and genetic factors in addition to our lifestyles. Fruits and vegetables do, however, promote growth and an adequate intake of calories. Positive evidence shows that nutrients absorbed from vegetables, such as sulfides, have the ability to detoxify carcinogens and stimulate anti-cancer drugs. Antioxidative flavonoids found within both fruits and vegetables act as protection against myocardial infarction and cancer. Flavonoids also have the ability to inhibit clots and prevent inflammation, thus decreasing the possibility of stroke and hypertension (Duyn and Pivonka, 2000).
Another factor to take into account is a change in diet toward cis fats, the unsaturated fats, and away from trans fats. Consumption of trans fats causes increased deposition of cholesterol in the bloodstream, leading to increasing body mass and atherosclerosis. This can be easily avoided by consuming fat-free products and avoiding products that list partially hydrogenated fat or oil on the label. Those who may be affected by obesity due to their genetic traits may find losing weight difficult, but it is definitely not impossible. Obesity is more of an environmental factor, which can be tackled effectively as long as you avoid poor eating habits. Having mental support by joining groups aiming to lose weight increases motivation and helps improve public health gradually, which eventually will lead to a decrease in obesity.

Other obesity-related problems include a range of difficulties with day-to-day activities and lifestyles. Problems vary from breathlessness and increased sweating to joint and back pain. These minor problems have an impact on the individual, as they may start to feel isolated from family and friends, thus leading to depression and increased consumption of food once more.

When looking for a solution to obesity, we need to take into consideration daily intake and consumption and ask the question: why do we exceed the recommended value knowing that it is detrimental to our health? Scientific research has shown that the orbitofrontal cortex, which is associated with our taste sensors, is activated when we eat fatty food (BBC, 2014). One major method to eradicate obesity is through education and informing parents of a good healthy diet for the benefit of their children, in order to give them a better quality of life. Engaging the community is a slow but sure step in reducing obesity, as is increasing exercise among the population.
By informing the population we can work hand in hand to increase the production of healthy meals in restaurants and supermarkets. Furthermore, we can aid in providing good school meals for children and encourage them to join school activities during and after school to increase fitness levels. Consistency is key to preventing and reducing obesity. Where it is easy to start eating unhealthy foods, it is twice as hard to maintain a healthy lifestyle, especially with what we are exposed to on a daily basis (Staff, 2015). Patients whom doctors define as morbidly obese may undergo a gastric band operation. This is a surgery in which a gastric band (a tube-like balloon) is fitted around the upper part of the stomach. The surgeon carrying out the procedure then fills the band with a liquid in order to create a pouch which fills up fairly quickly with the consumption of food, resulting in the contraction of the stomach sending a balance of hormonal and neurological signals to the brain, giving the feeling of being full more quickly. As a result, the patient will potentially eat less, losing on average between half and two-thirds of excess body mass within one year of the insertion of the gastric band (Roizman, 2016). Another form of surgery commonly used for weight loss in the UK is gastric bypass surgery. This is also a method used for the morbidly obese; it focuses specifically on reducing the size of the stomach, as well as bypassing parts of the intestine in order to absorb fewer calories. The procedure aids in the loss of 65% of excess weight within two years. A benefit of this surgery is the ability to maintain the weight loss for at least 10 years, though it is vitally important to change lifestyle and ensure you exercise regularly. As for side-effects, both operations have some that are temporary, such as vomiting after eating excessively, which will lessen as the overall consumption of food becomes a lot lower.
Eventually, the side-effects will fade over time as patients start to eat less and become aware of the limits on the amount they can consume daily (Bupa, 2014). In conclusion, the rising rate of obesity can only be reduced if the community comes together in order to be informed of, and aware of, the impact their dietary lifestyles will have on them in the future. In order to tackle the situation effectively, the population will need to follow the guidelines that the government has set out to improve the lifestyles of ourselves and others. Both surgical procedures produce very efficacious responses and give people who are obese the opportunity to change their lifestyles. In addition, the surgeries could be a potential route to preventing the NHS from becoming bankrupt from the excessive costs of diabetes and obesity care, potentially saving the service billions (Walsh, 2012).

Public Health England obesity knowledge and intelligence team (2016) UK and Ireland prevalence and trends. Available at: https://www.noo.org.uk/NOO_about_obesity/adult_obesity/UK_prevalence_and_trends (Accessed: 2 October 2016).

Press Association (2014) Cost of obesity ‘greater than war, violence and terrorism’. Available at: http://www.telegraph.co.uk/news/health/news/11242009/Cost-of-obesity-greater-than-war-violence-and-terrorism.html (Accessed: 29 September 2016).

BBC (2014) What are the health risks of obesity? Available at: http://www.bbc.co.uk/science/0/21702372 (Accessed: 2 October 2016).

Bupa (2014) Gastric band surgery to lose weight. Available at: http://www.bupa.co.uk/health-information/directory/g/gastric-band (Accessed: 17 October 2016).

Cardwell, M. (2010) A-Z psychology handbook: Digital edition (Complete A-Z handbooks). 4th edn. London: Philip Allan Updates.

NHS Choices (2016) Obesity. Available at: http://www.nhs.uk/conditions/obesity/pages/introduction.aspx (Accessed: 18 October 2016).
Duyn, M.A.S.V. and Pivonka, E. (2000) ‘Overview of the Health Benefits of Fruit and Vegetable Consumption for the Dietetics Professional’, Journal of the American Dietetic Association, 100(12), pp. 1511–1521.

Ebbeling, C.B., Pawlak, D.B. and Ludwig, D.S. (2002) ‘Childhood obesity: public-health crisis, common sense cure’, The Lancet, 360(9331), pp. 473–482.

Public Health England (2016) UK and Ireland prevalence and trends: Public Health England obesity knowledge and intelligence team. Available at: https://www.noo.org.uk/NOO_about_obesity/adult_obesity/UK_prevalence_and_trends (Accessed: 18 October 2016).

Roizman, T. (2016) Gastric band surgery to lose weight. Available at: http://www.bupa.co.uk/health-information/directory/g/gastric-band (Accessed: 7 October 2016).

Staff, M.C. (2015) ‘Obesity prevention’, Mayo Clinic.

Walsh, S. (2012) The ugly truth about having a gastric bypass: The frank diary from an obesity nurse. Available at: http://www.dailymail.co.uk/health/article-2147776/The-ugly-truth-having-gastric-bypass-The-frank-diary-obesity-nurse.html (Accessed: 19 October 2016).

When doing this research I attempted to find sources online using PubMed and various online articles. Since obesity is something I have come across before, I was able to apply my knowledge and extend what I already know with fairly new medical advances from the research I gathered. I feel fairly confident in my ability to draw information out of a text, interpret it, and understand what it is trying to convey. I use the useful information and try to write in a way that summarises the text I have just read, whilst consistently referencing all work not done by myself. In terms of difficulty, I felt a lack of confidence in referencing, so I apologize in advance if I have done this wrong. I researched and chatted with other students for advice on referencing, and my results are as shown in the essay.
I’d like an overview of my way of writing, because I feel my style of writing has not changed since A-level and I am not writing scientifically, or maybe not drawing out enough content from journals and online articles. Another weakness is that I should have created a contents page and broken my essay into subheadings to show more structure in my work.
NYU students win award for solution to safeguard electronic voting machines

BROOKLYN, Dec. 9, 2016: When electronic voting machines came into use in the early 1990s, they made voting cheaper, easier, and more accessible to the electorate, but few programmers gave thought to the issue of cybersecurity. The modern Internet, however, has given rise to the possibility of cyber attacks that could catastrophically disrupt the democratic process and affect the course of a nation's history. In September 2016 the internationally recognized computer protection firm Kaspersky Lab, in partnership with The Economist, mounted a challenge inviting teams from universities around the world to design a system for digital voting that addressed such issues as ensuring privacy and validating contested results. New York University students Kevin Kirby, Anthony Masi, and Fernando Maymi took home first place in the challenge with their system, Votebook, which is secure, scalable, and consistent with current voter behavior and expectations of privacy. As per the rules of the challenge, Votebook is based on blockchain technology, which creates a distributed, irreversible, incontrovertible public ledger that has been described as double-entry accounting for the digital age. (Blockchain is best known as the apparatus that supports the alternative currency Bitcoin.) Votebook employs a "permissioned blockchain" configuration in which a central authority admits voting machines to the network prior to the start of the election, and then the voting machines act autonomously to build a public, distributed ledger of votes. Voters would still register and show up to the polls just as they do in our current system, ensuring minimal disruption of voter expectations.
At the conclusion of the election, the ledger of data for each voting machine would be released to the public at large to allow for auditing. Each voter could then check to see his or her vote was counted by entering a set of unique values (voter identification, individual ballot identification) that only the voter would know – the values, when cryptographically hashed, match the entry on the ledger that represents that individual's vote; no one else would be able to decipher those hashes. Kirby, Masi, and Maymi, who were awarded $10,000 for taking first place in the challenge, are all NYU ASPIRE scholars, taking part in a program that aims to produce cybersecurity specialists who understand information-security issues from a multidisciplinary perspective. The program is based at the NYU Center for Cybersecurity (CCS) and accepts students from across the university, including the School of Law (where Kirby is a third-year student) and the Tandon School of Engineering (where Masi and Maymi are graduate students majoring in cybersecurity). "Concerns about ballot stuffing, fraud, and cyber attacks have rattled voter confidence," Maymi said. "It's time that the voting system became more transparent, and we have shown that we should and can harness the power of blockchain technology to serve democracy." "We are exceedingly proud that our ASPIRE scholars triumphed in this important challenge," said NYU Tandon Professor of Electrical and Computer Science Ramesh Karri, who co-founded CCS. "Their win is proof that interdisciplinary teams can create exceptionally secure information systems based on a deep understanding of social, behavioral, and public policy implications. With digital data becoming more and more essential in every facet of our lives – including the way in which we elect our leaders – their expertise is invaluable."
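The hashed-receipt check described in the article can be sketched as follows. This is an illustrative model only: the field names, the separator, and the choice of SHA-256 are assumptions made for the sketch, not details taken from the actual Votebook design.

```python
import hashlib

def ledger_entry(voter_id: str, ballot_id: str, vote: str) -> str:
    """Hash the voter's private values together with the vote.

    Only someone who knows both voter_id and ballot_id can recompute
    this digest, so the published ledger reveals nothing by itself.
    """
    data = f"{voter_id}|{ballot_id}|{vote}".encode()
    return hashlib.sha256(data).hexdigest()

# After the election, the ledger of hashed entries is published.
ledger = {ledger_entry("voter-1234", "ballot-9876", "Candidate A")}

def verify(voter_id: str, ballot_id: str, vote: str) -> bool:
    """A voter re-enters their private values to confirm the vote was counted."""
    return ledger_entry(voter_id, ballot_id, vote) in ledger

print(verify("voter-1234", "ballot-9876", "Candidate A"))  # True
print(verify("voter-1234", "ballot-9876", "Candidate B"))  # False
```

Because the digest is one-way, an observer of the ledger cannot work backward to the voter's identity or ballot, which matches the privacy property the article describes.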
Chapter 2 – The Power Wall and Multicore Computers

The material in this chapter is considered a continuation of the previous chapter, which covers the history of computing to about 1995 or so. This chapter presents a design problem that first appeared about 2002 to 2005. This problem is related to heat dissipation from the CPU; when it gets too hot, it malfunctions. This material is placed in a separate chapter on the chance that an instructor wants to assign it without requiring the earlier history. The reader is assumed to have a basic understanding of direct–current electronics, more specifically the relationship of voltage and current to electrical power dissipated. Most of this material is based on Ohm’s law. The reader is also expected to understand the concept of area density; if one million transistors are placed on a chip that has an area of one square millimeter, then the transistor density is one million per square millimeter. The reader is also assumed to have a grasp of the basic law of physics that any power consumed by an electronic circuit is ultimately emitted as heat. If we say that a CPU consumes 50 watts of electrical power, we then say that it emits 50 watts of heat. We do note that heat is usually measured in units different from those used to measure electrical power, but the two are interchangeable through well–established conversion equations. Two of the more significant (but not the only) factors in the total heat radiated by a chip are the voltage and the transistor areal density. One version of Ohm’s law states that the power dissipated by a transistor varies as the square of the voltage; this is important. As the important measure is heat radiated per unit area, more densely packed transistors will emit more heat per unit area than less densely packed ones.

Introduction to the Topic

It should be no news to anyone that electronic computers have progressed impressively in power since they were first introduced about 1950.
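The voltage-squared relationship mentioned in the prerequisites above can be made concrete with the standard first-order formula for dynamic power in CMOS logic, P = C·V²·f (switched capacitance times voltage squared times clock frequency). The numbers below are illustrative assumptions, not measurements of any particular chip.

```python
def dynamic_power(c_farads: float, v_volts: float, f_hertz: float) -> float:
    """Approximate dynamic power of a CMOS circuit: P = C * V^2 * f."""
    return c_farads * v_volts**2 * f_hertz

# Hypothetical chip: 1 nF of effective switched capacitance at 2 GHz.
base = dynamic_power(1e-9, 1.2, 2e9)

# Dropping the supply voltage from 1.2 V to 1.0 V at the same frequency...
reduced = dynamic_power(1e-9, 1.0, 2e9)

# ...cuts power by the square of the voltage ratio: (1.0/1.2)^2, about 69%.
print(f"{base:.2f} W -> {reduced:.2f} W ({reduced / base:.0%} of original)")
```

The quadratic dependence on voltage is why lowering the supply voltage, even slightly, is such an effective lever on heat output.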
One of the causes of this progress has been the constant innovation in the technology with which to implement the digital circuits. The last phase of this progress, beginning about 1972, was the introduction of single–chip CPUs. These were first fabricated with LSI (Large Scale Integrated) circuit technology, and then with VLSI (Very Large Scale Integrated) circuitry. As we trace the later development of CPUs, beginning about 1988, we see a phenomenal increase in the number of transistors placed on a CPU chip, without a corresponding increase in chip area. There are a number of factors contributing to the increased computing power of modern CPU chips. All are directly or indirectly due to the increased transistor density found on those chips. Remember that the CPU contains a number of standard circuit elements, each of which has a fixed number of transistors. Thus, an increase in the number of transistors on a chip directly translates to an increase in either the number of logic circuits on the chip, or the amount of cache memory on a chip, or both. Specific benefits of this increase include:
1. Decreased transmission path lengths, allowing an increase in clock frequency.
2. The possibility of incorporating more advanced execution units on the CPU. For example, a pipelined CPU is much faster, but requires considerable circuitry.
3. The use of on–chip caches, which are considerably faster than either off–chip caches or primary DRAM.
When discussing transistor counts and transistor densities, your author (because he has a strange sense of humor) wants to introduce an off–beat measure of area that can easily be applied to measuring CPU chips. This unit is the “nanoacre”. The acre is a unit of measure normally used in land surveys. One acre equals approximately 4,050 square meters, or 4.05·10⁹ square millimeters. Thus, one nanoacre equals 4.05 mm², a square about 2.02 millimeters (0.08 inches) on a side.
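The nanoacre arithmetic is easy to sanity-check with a few lines of code; the die area used in the second calculation is the 143 mm² figure quoted below for the Core Extreme x6800.

```python
ACRE_MM2 = 4.05e9                 # one acre ~ 4,050 m^2 = 4.05e9 mm^2

# One nanoacre is a billionth of an acre.
nanoacre_mm2 = ACRE_MM2 * 1e-9    # 4.05 mm^2

side_mm = nanoacre_mm2 ** 0.5     # ~2.01 mm on a side
print(f"1 nanoacre = {nanoacre_mm2:.2f} mm^2, "
      f"a square {side_mm:.2f} mm on a side")

# The Core Extreme x6800 die: 143 mm^2 works out to about 35 nanoacres.
print(f"143 mm^2 = {143 / nanoacre_mm2:.1f} nanoacres")
```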
The die for the Intel Core Extreme x6800 chip has an area of about 143 mm², equal to approximately 35 nanoacres. Incidentally, the size of a typical office cubicle is about 1 milliacre. So much for geek humor. In implementations of CPU chips, the increase in transistor count has followed what is commonly called “Moore’s Law”. Named for Gordon Moore, the co–founder of Intel Corporation, this is an observation on the number of transistors found on a fixed–sized integrated circuit. While not technically in the form of a law, the statement is so named because the terms “Moore’s Observation”, “Moore’s Conjecture” and “Moore’s Lucky Guess” lack the pizazz that we expect for the names of popular statements. In a previous chapter, we have shown a graph of transistor count vs. year that represents Moore’s Law. Here is a more recent graph from a 2009 paper [R79]. The vertical axis (logarithmic scale) represents the transistor count on a typical VLSI circuit.

Figure: Transistor Count on a CPU vs. Year of Production

Moore’s law has little direct implication for the complexity of CPU chips. What it really says is that this transistor count is available, if one wants to use it. Indeed, one does want to use it. There are many design strategies, such as variations of CPU pipelining (discussed later in this textbook), that require a significant increase in transistor count on the CPU chip. These design strategies yield significant improvements in CPU performance, and Moore’s law indicates that the transistor counts can be increased to satisfy those strategies. The increased area density of transistors means that transistors, and hence basic logic circuits, are placed more closely together. This shortens the transmission paths between the logic circuits and allows for an increase in clock speed. Here is a graph illustrating the increase in CPU clock speed as a function of year of production. This is copied from the first chapter of this textbook; note how it parallels the transistor count.
This is a graph of clock speed as a function of year. As we shall soon see, the values for years 2004 and 2005 might represent values achieved in test models, but they probably do not represent values found in actual production models. One way to summarize the case up to about the year 2004 is that computer CPU capabilities were increasing continuously and dramatically. Here is another figure that illustrates both the effects of clock speed and technological change. It is from the textbook by Stallings [R6]. Note that the performance of a typical CPU increases dramatically beginning about 1998. Then something happened to slow this progression. That is the subject of this chapter. Here is a clue to the problem, which is now called the “power wall”. This is taken from the textbook by Patterson & Hennessy, which is not the same as the larger and more advanced book by the same authors that is called “Hennessy & Patterson”. The design goal for the late 1990’s and early 2000’s was to drive the clock rate up. This was done by adding more transistors to a smaller chip. Unfortunately, this increased the power dissipation of the CPU chip beyond the capacity of inexpensive cooling techniques. Here is a slide from a talk by Katherine Yelick of Lawrence Berkeley National Lab [R81] that shows the increase of power density (watts per square centimeter) resulting from the increase in clock speed of modern CPUs. One does not want to use a CPU to iron a shirt.

Figure: Modern CPUs are Literally Too Hot

We can see the effect of this heat dissipation problem by comparing two roadmaps for CPU clock speed, one from the year 2005 and one from about 2007. Here is the roadmap for the year 2005, as seen by the Intel Corporation. In 2005, it was thought that by 2010, the clock speed of the top “hot chip” would be in the 12 – 15 GHz range. Instead, the problem of cooling the chip became a major problem, resulting in the following revision of the clock rate roadmap.
The revised roadmap reflects the practical experience gained with dense chips that were literally hot: they radiated considerable thermal power and were difficult to cool. The CPU chip (code named “Prescott” by Intel) appears to be the high point in the actual clock rate. The fastest mass–produced chip ran at 3.8 GHz, though some enthusiasts (called “overclockers”) actually ran the chip at 8.0 GHz. Upon release, this chip was thought to generate about 40% more heat per clock cycle than earlier variants. This gave rise to the name “PresHot”. The Prescott was an early model in the architecture that Intel called “NetBurst”, which was intended to be scaled up eventually to ten gigahertz. The heat problems could never be handled, and Intel abandoned the architecture. The Prescott idled at 50 degrees Celsius (122 degrees Fahrenheit). Even equipped with the massive Akasa King Copper heat sink, the system reached 77 Celsius (171 F) when operating at 3.8 GHz under full load and shut itself down. Here are pictures of two commercial heat sinks for Pentium–class CPUs. Note how large they are. Most users would not care to have such equipment on their computers.

Figure: The Akasa Copper Heat Sink and the Mugen 2 Cooler

Another way to handle the heat problem would have been to introduce liquid cooling. Most variants of this scheme use water cooling, though the Cray–2 used the chemical Fluorinert, originally developed for medical use. The problem with liquid cooling is that most users do not want to purchase the additional equipment required. The IBM z/10 mainframe computer is one that uses water cooling. This is a multiprocessor system based on the IBM Power 6 CPU, running at 4.67 GHz, more than 50% faster than the Intel Prescott. It is reported that lab prototypes have been run at 6 GHz. Here is the water cooling system for the z/10. It is massive. The tubing feeds cold water to cooling units in direct contact with the CPU chips. Each CPU chip is laid out so as not to have “hot spots”.
One of the IBM laboratories in Germany has used this cooling water (warmed by the computer) to heat buildings in winter. So, we have a problem. It can be solved either by the use of massive cooling systems (not acceptable to most users of desktop computers), or by coming up with another design. Intel chose to adopt a strategy called “multicore”, also called “chip multiprocessor” or “CMP”. The strategy adopted by Intel Corporation was to attack the problem at its source: reduce the power consumption of the CPU while maintaining or increasing performance. As early as October 2009 [R82], Intel was speaking of two time periods in the development of VLSI chips: the “traditional–scaling” period and the “post traditional–scaling” period. The dividing line between the two was set some time in the year 2003. At this point, Intel and other companies are attempting to address two related problems:
1. How to get increased performance out of a CPU without overheating it.
2. Addressing the concerns of large data centers that may have thousands of processors and want to lower their bills for electrical power and cooling.
An example of the second problem can be seen in organizations that might be called “scientific data centers”. These are centers that run a few large supercomputers, each of which is fabricated from thousands of processors that are networked together. One good example of such a supercomputer is the Cray XK6 [R83]. It can be configured with up to 500,000 cooperating processors, organized into “compute nodes” that combine AMD’s 16–core Opteron 6200 processors and NVIDIA’s Tesla X2090 GPU (Graphical Processing Unit), used as a vector processor. Typically, the computer is organized into a number of cabinets, each of which holds up to 96 compute nodes; the picture on the web site shows a 16–cabinet configuration. Each cabinet requires about 50 kilowatts of power, with additional power required to cool the computer room.
Remember that each cabinet produces about fifty kilowatts of heat, which requires power–consuming air conditioning to remove. Any reduction in the power consumption of a compute node would yield immediate benefits. Here is the characterization of the power problem given by Intel in a white paper entitled “Solving Power and Cooling Challenges for High Performance” [R84], published in June 2006. “It takes a comprehensive strategy to scale high performance computing (HPC) capabilities, while simultaneously containing power and cooling costs.” The executive summary of this presentation is worth quoting at some length. “Relief has arrived for organizations that need to pack more computing capacity into existing high performance computing (HPC) facilities, while simultaneously reducing power and cooling costs. For some time, Intel has been focused on helping IT managers address these issues, by driving new levels of energy-efficiency through silicon, processor, platform and software innovation. The results of these efforts are clearly evident in the new Dual-Core Intel® Xeon® processor 5100 series (code-name Woodcrest) and the upcoming Dual-Core Intel® Itanium® 2 processor 9000 series (code-name Montecito), which dramatically increase performance and energy-efficiency compared to previous generations.” “These and other recent innovations are major steps toward increasing density, pure performance, price/performance and energy-efficiency for HPC solutions, but they are only the beginning. Intel researchers continue to push the limits of transistor density in next-generation process technologies, while simultaneously driving down power consumption. Intel is also delivering software tools, training and support that help developers optimize their software for multi-core processors and 64-bit computing. 
These are essential efforts, since optimized software can substantially boost performance and system utilization, while contributing to the containment or even reduction of power consumption.” [R84] Experience has shown that one way to handle the power problem of a highly pipelined CPU with a high clock frequency is to replace this single large processor by a number of smaller and simpler processors with lower clock frequencies. In effect, this places multiple CPUs on a single chip; though the terminology is to refer to multiple cores on a single CPU chip. The decreased complexity of the instruction pipeline in each core yields a reduction in the transistor count (hence transistor density) at little cost in performance for the multiple cores considered as a single CPU. As a bonus, one gets more chip area onto which a larger cache memory can be placed. Increasing the size of an on–chip cache memory is one of the most cost–effective and power–effective ways to boost performance. In a 2006 white paper, Geoff Koch described Intel’s rationale for multicore processing. “Explained most simply, multi–core processor architecture entails silicon design engineers placing two or more execution cores – or computational engines – within a single processor. This multi–core processor plugs directly into a single processor socket, but the operating system perceives each of its execution cores as a discrete logical processor with associated execution resources.” [R85] “Multi–core chips do more work per clock cycle, and thus can be designed to operate at lower frequencies than their single–core counterparts. Since power consumption goes up proportionally with frequency, multi–core architecture gives engineers the means to address the problem of runaway power and cooling requirements.” [R85] A bit later, Koch notes the following.
“With Intel Core microarchitecture, each core is equipped with a nearly complete set of hardware resources, including cache memory, floating point and integer units, etc. One programming thread can utilize all these resources while another thread can use all the hardware resources on another core.” [R85] Anyone familiar with the use of MS–Windows on a modern computer will recall that there are multiple processes under execution at any one time. These processes can be executed one at a time on a single CPU, or more efficiently on a multicore CPU. In other words, the typical MS–Windows work load favors the use of multicore designs. Here is a picture of the core die and a diagram of one of the more recent Intel multicore offerings, the quad–core CPU called Core i7. Each execution core has its own split L1 cache as well as a level–2 cache. The four cores share a large level–3 cache. One of the key goals, evident in a number of publications [R79, R84], is to increase the system performance per watt of power used. The following is a figure from the Intel White Paper [R84] showing the increase in performance on several standard benchmarks achieved by the new design, called “Woodcrest”. As of the year 2010, Intel has announced a number of multicore offerings. Most of those available had either four or eight cores per chip. There is a report, dated 2009 and found on Wikipedia [R86], of Intel releasing a single–chip 48–core CPU “for software and circuit research in cloud computing”. The Wikipedia reference is the link [R87]. There is also a reference in the Wikipedia article to a single–chip 80–core CPU prototype [R88]. The author of this textbook has not been able to verify either claim using only material from an Intel Corporation web site. However, each claim is probably true. Later in this textbook, we shall discuss issues of parallel computing in general. At that time, we shall introduce the term “manycore computer”, as distinct from “multicore computer”.
The distinction originates from the development of the NVIDIA GPU (Graphical Processor Unit) which could feature 768 execution cores, as opposed to the 8 cores found on multicore computers of the time. There may be a dividing line of core count between the two design philosophies, but it has yet to be defined. It may never be defined. Time will tell.
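As a closing illustration of the multicore power argument made in this chapter: a common first-order model assumes supply voltage scales roughly with frequency, so dynamic power (P = C·V²·f) grows approximately as f³. The frequencies and core counts below are illustrative assumptions, not data for any real chip.

```python
def relative_power(freq_ratio: float) -> float:
    """First-order model: V scales with f, so P = C * V^2 * f grows as f^3."""
    return freq_ratio ** 3

# One core at full frequency: normalized throughput 1.0, power 1.0.
single_core_power = relative_power(1.0)

# Two cores, each at 80% frequency: up to 1.6x the throughput on a
# perfectly parallel workload, at roughly the same total power.
dual_core_power = 2 * relative_power(0.8)

print(f"single core: power {single_core_power:.2f}, throughput 1.00")
print(f"dual core:   power {dual_core_power:.2f}, throughput 1.60")
```

Under this model the dual-core design delivers about 60% more throughput for roughly 2% more power, which is the essence of the trade Intel made when it abandoned the pursuit of ever-higher clock rates.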
The Mars Climate Orbiter was one-half of the Mars Surveyor ‘98 program, along with the Mars Polar Lander. The orbiter’s mission was to (duh) reach Mars orbit to study the weather and climate and to, ultimately, serve as a communications relay for the lander. The orbiter, launched on December 11, 1998, never made it into orbit, though, due to a software bug in a ground-based system. When attempting to enter Mars orbit on September 23, 1999, the orbiter approached at a lower than expected altitude, causing it to disintegrate. The cause was ultimately determined to be that the ground-based software generated and sent thrust instructions using English measurements (pound-force), while the onboard software was expecting the measurements in metric (newtons). Ooops.
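The mismatch is easy to reproduce: one pound-force is about 4.448 newtons, so raw numbers interpreted in the wrong unit are off by a factor of roughly 4.4. Here is a sketch of the kind of explicit conversion that would have caught it; the function names and values are hypothetical, not taken from the actual flight software.

```python
LBF_TO_NEWTONS = 4.44822  # 1 pound-force in newtons

def ground_software_impulse_lbf() -> float:
    """Hypothetical ground system emitting a thrust impulse in lbf-seconds."""
    return 10.0

def onboard_expects_newtons(impulse_n_s: float) -> float:
    """Hypothetical onboard consumer that assumes newton-seconds."""
    return impulse_n_s

raw = ground_software_impulse_lbf()

# The bug: passing the raw lbf-second number where N-seconds are expected.
wrong = onboard_expects_newtons(raw)

# The fix: convert explicitly at the interface boundary.
right = onboard_expects_newtons(raw * LBF_TO_NEWTONS)

print(f"interpreted without conversion: {wrong:.2f}")
print(f"correctly converted: {right:.2f} (factor {right / wrong:.3f}x)")
```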
DNS Security Starts with the DNS Server

DNS security starts with the DNS server. The DNS server is best equipped to deal with DNS threats since it is where all the DNS intelligence resides. The following are four capabilities that are necessary to protect the DNS. It is worth investigating the capabilities of your DNS server to make certain all of these defenses are available and enabled.

Defense No. 1: UDP source port randomization (UDP SPR) was specified by key DNS vendors as the initial response to the Kaminsky attack. Randomizing the UDP source port used in a query makes it harder for an attacker to guess the query parameters in a fake answer. Although UDP SPR is a useful defense, there is widespread concern that it is not an adequate long-term response to cache poisoning. In addition, Network Address Translation (NAT), firewalls, load balancers and potentially other devices in the network may de-randomize UDP source ports, thus rendering this protection less effective. For these reasons, it is essential that other defenses are available and enabled.

Defense No. 2: A secure mode of DNS operation when a potential attack is detected is another useful defense. The DNS server should be able to switch from a UDP to a TCP connection when mismatched query parameters are observed (a sign an attack may be underway). This allows an attacker only one chance to send a fake DNS answer for each fake DNS question, which both slows the progress of an attack and significantly reduces the probability of success (potentially by hundreds of times).

Defense No. 3: The single most important defense provides protection when an attacker gets lucky and correctly guesses query parameters, thus beating other defenses. This defense screens DNS query responses and discards potentially harmful information in the response, such as additional information that delegates DNS answers to a server that is controlled by the attacker. This protects the DNS server in ways a firewall, IPS or any other external device cannot.

Defense No. 4: The last defense to enable is alerting IT of unusual DNS activity and providing specific details so remedial action can be taken.
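To make Defense No. 1 concrete, the odds facing an off-path spoofer can be worked out directly: with a fixed source port the attacker only has to guess a 16-bit transaction ID, while with randomized ports the search space grows by the number of usable ports. The following is a small illustrative calculation, not vendor code; the 1024-65535 ephemeral port range is an assumption, and real resolvers vary.

```python
def spoof_success_probability(port_choices, txid_choices=1 << 16, attempts=1):
    """Chance an off-path attacker hits the right (source port,
    transaction ID) pair within `attempts` blind guesses."""
    search_space = port_choices * txid_choices
    return min(1.0, attempts / search_space)

# Fixed source port (pre-SPR): only the 16-bit transaction ID must be guessed.
fixed_port = spoof_success_probability(port_choices=1)

# Randomized source port, assuming ephemeral ports 1024-65535 are usable:
# the attacker's search space grows by a factor of 64512.
randomized = spoof_success_probability(port_choices=65536 - 1024)
```

Note that the attacker gets many `attempts` per outstanding query in practice, which is why the article stresses layering UDP SPR with the other three defenses rather than relying on it alone.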
<urn:uuid:ed366a1e-1b07-45d1-bbbd-248da3b438d5>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Security/How-to-Secure-Your-Network-from-Kaminskys-DNS-Cache-Poisoning-Flaw/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00408-ip-10-171-10-70.ec2.internal.warc.gz
en
0.901273
416
2.609375
3
The National Center for Atmospheric Research (NCAR) selected Juniper Networks to provide infrastructure for a new supercomputer that will perform 5.34 quadrillion calculations per second. According to a recent press release, this new supercomputer will be installed at NCAR’s Wyoming Supercomputing Center this fall; it will be operational in 2017. With the supercomputer, researchers will be able to make more accurate predictions about the impact of weather and global warming. The supercomputer will be equipped with Juniper Networks’ QFX10008 Switch, which offers increased scalability by allowing remote users from around the world to access the tool. “The new supercomputer is expected to benchmark among the top supercomputers in the world,” said Al Kellie, Associate Director of NCAR’s Computational and Information Systems laboratory. “The network will allow scientists around the world to access resources and foster a community of global collaboration.” Using the supercomputer to conduct data calculations on climate modeling, scientists will be able to better inform emergency rescue teams and help governments plan for changes in water cycles. Juniper Networks’ infrastructure is designed to accommodate the bandwidth required by researchers processing data and conducting weather analysis. “We’re thrilled that NCAR turned to Juniper Networks to meet its most demanding challenge yet,” said Tim Solms, vice president of U.S. Federal sales for Juniper Networks. “Juniper is proud to support the creation of a state-of-the-art supercomputing platform that will allow scientists and researchers to study the impact of climate and weather on the world’s populations and environments.”
<urn:uuid:b2fc281f-a20b-4c57-a11b-3182f4dbe5c8>
CC-MAIN-2017-04
https://www.meritalk.com/articles/supercomputer-to-boost-global-warming-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00132-ip-10-171-10-70.ec2.internal.warc.gz
en
0.893692
345
2.96875
3
An article at Computerworld explores whether the limitations of traditional interconnects will soon make the Moore’s Law debate (it’s dead, it’s not dead) irrelevant. Author Lamont Wood argues that the interconnect bottleneck poses the greatest threat to performance improvements. Moving data optically, via silicon photonics, may provide a way of resolving the data traffic jam. The article cites Linley Gwennap, principal analyst at The Linley Group, who believes that processors will come up against a performance wall in as little as five to ten years. It won’t matter how fast they are if the data flow is restricted. There are range and speed limitations associated with copper interconnects. According to Marek Tlalka, marketing vice president at photonics vendor Luxtera Inc., at 40 gigabits per second bandwidth, copper’s current top speed, the signal range falls to inches. Another expert at market research firm In-Stat questions whether transmission rates for copper will ever get to 100 gigabits per second. With silicon photonics, data moves through paths constructed of laser light beams. The data paths can cross each other without interference and different signals can even share the same path as long as they use different wavelengths (i.e., colors). Silicon photonics is also extremely energy-efficient and can transfer data over longer distances, about 10 kilometers (around 6 miles). Until recently, optical componentry was quite expensive, often prohibitively so, but manufacturing advances have brought costs down. Also, by their nature photonics structures are large relative to electrical components, but there is still room to pack the components tighter than is done with photonics currently, explains Luxtera’s Tlalka. Another work-around to the size problem is to use external lasers, as Luxtera does, instead of building a source of laser light directly onto the chip.
Intel is a big proponent of optical technology and so is Intel Labs Director Justin Rattner: “We felt that over the long term we have got to be moving data optically. Conventional electrical cables have too many physical limitations, while fiber has basically no limitations,” he states. Rattner sees a near future where customers will be given a choice between an electrical or an optical interface and believes a terabit of bandwidth will be achieved by the end of the decade.
<urn:uuid:7c7895c1-06c6-4251-82c0-dfa2eab7bfd0>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/02/09/without_silicon_photonics_moores_law_wont_matter/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00526-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918582
494
2.640625
3
A "natural" language is any language spoken or written by humans, such as English, rather than a computer language, but there are areas where the two intersect. For example, a user performing a natural language search would enter a question in plain English rather than depending on keywords, such as those that trigger a Google search. Ideally, a true natural language query should return a single, very precise answer with a high degree of confidence rather than the many, many results a search engine delivers, leaving the user to determine what is relevant. No disrespect to Google (whose original creative breakthrough has proven to be very important and valuable) or to other search engines like Microsoft's Bing, but solving the natural language problem is difficult. The problem is not just in syntactic analysis (computers have been able to parse sentences for some time), but rather in dealing with the subtleties, nuances and ambiguities common in natural language. Faux natural language attempts have failed; the best known is probably Ask Jeeves, which morphed into Ask.com and now is simply a front-end to a conventional search engine. It was a minor league attempt at a major league problem. Some have suggested that artificial intelligence (AI) would offer a natural answer to the natural language question, and it has had its fits and starts over the years. But, even though AI can now point to a number of successes (primarily in well-defined domains), it has only provided partial solutions to the natural language problem. IBM decided to take on natural language search as a Grand Challenge--an IBM R&D project that is technologically important and difficult, but whose success is easy for the average person to understand and meaningful for business and society. A past IBM Grand Challenge was the company's Deep Blue, a supercomputing system that defeated the human world chess champion Garry Kasparov in 1997.
<urn:uuid:3ac6c844-9de8-45e9-b26d-5012f4e860d2>
CC-MAIN-2017-04
http://www.networkcomputing.com/storage/ibms-watson-watershed-event-information-technology-and-society/903949206
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00345-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965218
381
3.265625
3
The Internet ranks somewhere between fire and sliced bread on the world’s list of greatest inventions. But despite being a fairly recent invention, its exact origin remains a point of dispute. Recently, writers from The Wall Street Journal and Scientific American weighed in on the issue, drawing comments from Google’s own Internet forefather Vint Cerf.

Just Ask Al

Al Gore famously blundered his way through a CNN interview in which he stated, “During my service in the United States Congress, I took the initiative in creating the Internet. I took the initiative in moving forward a whole range of initiatives that have proven to be important to our country’s economic growth and environmental protection, improvements in our educational system.” His statement was, most likely, just a poor choice of words. While it sounds like Gore was trying to say that he partially supported the creation of the Internet through legislation along with many others, he did say the words: “I took the initiative in creating the Internet,” leaving himself the option to take credit if anyone wanted to give it to him. It is generally agreed that Gore is not personally responsible for single-handedly creating the Internet, but he may have played at least a partial role in fostering its creation through federal legislation. And many people believe that the federal government essentially created the Internet through research and legislation.

Private Enterprise Is Responsible

But the government did not create the Internet, L. Gordon Crovitz wrote in a recent op-ed for The Wall Street Journal. The government envisioned a World Wide Web as early as the 1940s and went on to develop the Pentagon’s Advanced Research Projects Agency Network (ARPANET). However, that network did not lead to the Internet we have today, Crovitz wrote. Crovitz contends it was Xerox that invented the Internet, though the company wasn’t quite sure what it had.
Xerox used its computer networks to share copiers, because that was the company's business, but that’s where the idea stopped. When Steve Jobs visited Xerox in 1979 to borrow some ideas, he may have seen something bigger. "They just had no idea what they had," Jobs said. The government had many of today’s Internet’s integral pieces, such as TCP/IP, but never put them together, Crovitz wrote. It was ultimately private enterprise that made the connections to create the Internet we have today, Crovitz wrote – government just needed to get out of the way. Actually, the government did invent the Internet and Crovitz doesn’t really understand what he’s talking about, according to a Scientific American rebuttal written by Michael Moyer. No private company could have accomplished such a huge undertaking as the Internet, he wrote. Crovitz is confused about technology, Moyer wrote. Just because Xerox invented Ethernet, doesn’t mean it also invented “the” Internet – it didn’t, Moyer wrote. Connecting several computers together isn’t the same thing as a worldwide computer network. Robert Metcalfe, a researcher at Xerox PARC who co-invented the Ethernet protocol, jokingly referenced the idea on July 23 in a tweet that read, “Is it possible I invented the whole damn Internet?” “The most important part of what we now know of as the Internet is the TCP/IP protocol, which was invented by Vincent Cerf [sic] and Robert Kahn,” Moyer wrote. “Crovitz mentions TCP/IP, but only in passing, calling it (correctly) ‘the Internet’s backbone.’ He fails to mention that Cerf and Kahn developed TCP/IP while working on a government grant.” Moyer also pointed out that several others criticized Crovitz for his misunderstandings, perhaps most notably the author of Dealers of Lightning, a history of Xerox PARC that Crovitz used as his main source of material. “While I’m gratified in a sense that he cites my book,” Michael Hiltzik wrote, “it’s my duty to point out that he’s wrong. 
My book bolsters, not contradicts, the argument that the Internet had its roots in the ARPANET, a government project.”

Actually, I Invented the Internet

In a recent interview published by CNET, Cerf, one of the creators of the TCP/IP protocol, responded to Crovitz’s piece, rejecting most of his ideas, which he characterized as a “revisionist interpretation.” The Internet did start with the ARPANET project and the federal government directly funded the creation of the Internet we know today, Cerf wrote. And Xerox deserves credit for great work, Cerf wrote, including creation of the Ethernet protocol, the ALTO personal computer, the Xerox Network System and PARC Universal Packet. “XEROX did link homogenous Ethernets together but the internetworking method did not scale particularly well,” Cerf wrote. Ultimately, it was the work of researchers around the world from dozens of organizations that created the Internet. “After our initial paper was published, detailed design was conducted at Stanford during 1974 and implementation started in 1975 at Stanford, BBN and University College London. After that, a number of other institutions, notably MIT, SRI, ISI, UCLA, NDRE, engaged heavily in the work,” Cerf wrote. As for Crovitz’s declaration that the TCP/IP protocol languished for decades in the hands of government, only to be set free by private enterprise, Cerf responded, “I would happily fertilize my tomatoes with Crovitz's assertion.”
<urn:uuid:a8b9caf3-c4f8-4527-a2e5-930310f36ccc>
CC-MAIN-2017-04
http://www.govtech.com/e-government/164037416.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00253-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964999
1,213
2.640625
3
This document will introduce OpenVPN as a free, secure, and easy to use and configure SSL-based VPN solution. The document will present some simple (and verified) scenarios that might be useful for preparing security/networking labs with students, for creating a remote access solution, or as a new project for the interested home user. All scenarios presented in this document have been tested using a mix of Red Hat Linux Fedora Core 2, Microsoft Windows 2000 Professional and Microsoft Windows XP Professional. However, this document comes with no support and no guarantees. This document assumes that the reader knows the fundamental basics of the Linux and Microsoft operating systems, basic VPN technology and IP routing. The document will not explain how to build firewall rule bases, nor will it explain the technologies used, like “IP”, “VPN”, “firewalls”, “SSL”, “PKI”, “certificates”, “IPSEC” etc. Download the paper in PDF format here.
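The scenarios in the paper center on OpenVPN configuration files. As a flavor of what such a configuration looks like, here is the classic static-key point-to-point setup. This is a sketch only: the hostname, tunnel addresses and key filename are placeholders, and the paper's verified scenarios should be preferred for real deployments.

```
# server.conf — listens on the default UDP port 1194
dev tun
ifconfig 10.8.0.1 10.8.0.2
secret static.key

# client.conf — 'vpn.example.org' is a placeholder hostname
remote vpn.example.org
dev tun
ifconfig 10.8.0.2 10.8.0.1
secret static.key
```

The shared key is generated once with `openvpn --genkey --secret static.key` and copied to both endpoints over a secure channel. Production setups normally use OpenVPN's TLS/certificate mode rather than static keys.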
<urn:uuid:ef6c1685-1516-4b43-958c-b4f771d78ab4>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2004/08/12/openvpn-101-introduction-to-openvpn/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00161-ip-10-171-10-70.ec2.internal.warc.gz
en
0.868817
217
2.75
3
I just returned from a weeklong meeting of the Internet Engineering Task Force (IETF), which just held its 77th meeting in Anaheim, California. I've been going to these meetings on and off since 1996 and it is interesting to me how communications during the meetings have changed since I first got involved. The IETF is the standards group which has devised most of the commonly used Internet protocol standards such as HTTP for the web, SMTP for email, TCP/IP for transport and many others. So, how does an Internet standards group like the IETF communicate during its meetings? In the Nineties, it was mostly via email. Some meeting information would be posted on the web - and often not - and if you weren't attending in person, the next communication you would see might be an email out to a mailing list for a particular working group which included draft minutes. There were some large meetings that used a multicast protocol called the MBONE, but this was mostly of use to academics and research labs, not to the typical business user. The first big breakthrough for more plentiful communication came when WiFi broke out within the IETF meetings during the years 1999-2000. Up until then, all IP connections at the meeting were hard wired via Ethernet. But WiFi was made widely available by renting network cards for laptops and setting up a few access points. Instantly, the meeting dynamics changed. Now, attendees and non-attendees could exchange email in real time about the meetings - a kind of early instant messaging. This kind of cross talk quickly became very popular and universal WiFi access became a "must have" for IETF meetings from that point forward. By about 2002, instant messaging (IM) had become popular as one of the new Internet applications and the IETF began working on standards related to IM, creating two approaches: SIMPLE, designed to work over SIP, and XMPP, which evolved from the Jabber community.
Many IETF long time attendees jumped on the Jabber bandwagon and very soon, Jabber clients became a new way of participating at meetings. Jabber supports buddy lists, instant messaging and chat rooms, so the IETF began to set up Jabber chat rooms for all meetings, a practice that continues to this day. Starting in about 2005, the IETF also began to stream audio versions of its meetings for the benefit of remote participants. So between the mailing lists, Jabber chat rooms and audio streaming, remote people could now participate in the meetings in real time through multiple tools. An article talking more about this can be found in the IETF Journal from May 2007. Now, let's fast forward to this latest meeting in March 2010. All of the real time communication technologies available since 2005 are now still in use. At this meeting, a few sessions were also now accessible via WebEx web conferences, to enable sharing of documents and audio conferencing. And now you can also add social media to the list, in the form of Twitter. I've only recently started using Twitter and tools like blogs or social networks are still the most common forms of social media for many users. But I did a search on IETF on Twitter during this meeting and found quite a few comments being posted, notably using the posting code #IETF77. For example, there was a fascinating presentation on Internet User Privacy which went on during the conference, so I did a tweet to that effect on Thursday evening including the web address for the presentation and voila, users monitoring #IETF77 on Twitter could click on the embedded link and see the presentation for themselves. Just minutes later, another Twitter user did a re-tweet of my post, helping to spread it around the 'Net.
In turn, there's ample opportunity for attendees to chat amongst themselves and with remote people using the tool of their choice, whether it is email, Jabber chat rooms or Twitter.
<urn:uuid:2f005de9-e43f-497e-89cb-0db718198044>
CC-MAIN-2017-04
http://www.dialogic.com/den/d/b/corporate/archive/2010/03/29/real-time-communications-and-ietf-standards-development.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00069-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968239
851
2.75
3
A replica is a copy or an instance of a user-defined partition that is distributed to an eDirectory server. If you have more than one eDirectory server on your network, you can keep multiple replicas (copies) of the directory. That way, if one server or a network link to it fails, users can still log in and use the remaining network resources (see Figure 1-18).

Figure 1-18 eDirectory Replicas

Each server can store more than 65,000 eDirectory replicas. However, only one replica of the same user-defined partition can exist on the same server. For a complete discussion of replicas, see Section 6.0, Managing Partitions and Replicas.

We recommend that you keep three replicas for fault tolerance of eDirectory (assuming you have three eDirectory servers to store them on). A single server can hold replicas of multiple partitions.

A replica server is a dedicated server that stores only eDirectory replicas. This type of server is sometimes referred to as a DSMASTER server. This configuration is popular with some companies that use many single-server remote offices. The replica server provides a place for you to store additional replicas for the partition of a remote office location. It can also be a part of your disaster recovery planning, as described in Using DSMASTER Servers as Part of Disaster Recovery Planning.

eDirectory replication does not provide fault tolerance for the server file system. Only information about eDirectory objects is replicated. You can get fault tolerance for file systems by using the Transaction Tracking System™ (TTS™), disk mirroring/duplexing, RAID, or NetIQ Replication Services (NRS). A master or read/write replica is required on servers that provide bindery services.

If users regularly access eDirectory information across a WAN link, you can decrease access time and WAN traffic by placing a replica containing the needed information on a server that users can access locally. The same is true to a lesser extent on a LAN.
Distributing replicas among servers on the network means information is usually retrieved from the nearest available server. eDirectory supports the types of replicas shown in the following figure:

Figure 1-19 Replica Types

The master replica is a writable replica type used to initiate changes to an object or partition. The master replica manages the following types of eDirectory partition operations:

- Adding replicas to servers
- Removing replicas from servers
- Creating new partitions in the eDirectory tree
- Removing existing partitions from the eDirectory tree
- Relocating a partition in the eDirectory tree

The master replica is also used to perform the following types of eDirectory object operations:

- Adding new objects to the eDirectory tree
- Removing, renaming, or relocating existing objects in the eDirectory tree
- Authenticating objects to the eDirectory tree
- Adding new object attributes to the eDirectory tree
- Modifying or removing existing attributes

By default, the first eDirectory server on your network holds the master replica. There is only one master replica for each partition at a time. If other replicas are created, they are read/write replicas by default. If you’re going to bring down the server holding a master replica for longer than a day or two, you can make one of the read/write replicas the master. The original master replica automatically becomes read/write. A master replica must be available on the network for eDirectory to perform operations, such as creating a new replica or creating a new partition.

eDirectory can access and change object information in a read/write replica as well as the master replica. All changes are then automatically propagated to all replicas. If eDirectory responds slowly to users because of delays in the network infrastructure, like slow WAN links or busy routers, you can create a read/write replica closer to the users who need it.
You can have as many read/write replicas as you have servers to hold them, although more replicas cause more traffic to keep them synchronized.

The read-only replica is a readable replica type used to read information about all objects in a partition’s boundaries. Read-only replicas receive synchronization updates from master and read/write replicas but don’t receive changes directly from clients. If login update is enabled, logins to a read-only replica fail, because a login involves attribute updates. This replica type is not able to provide bindery emulation, but it does provide eDirectory tree fault tolerance. If the master replica and all read/write replicas are destroyed or damaged, the read-only replica can be promoted to become the new master replica. It also provides NDS Object Reads, Fault Tolerance (contains all objects within the Partition boundaries), and NDS Directory Tree Connectivity (contains the Partition Root object).

A read-only replica should never be used to establish a security policy within a tree to restrict the modification of objects, because the client can always access a read/write replica and still make modifications. There are other mechanisms that exist in the directory for this purpose, such as using an Inherited Rights Filter. For more information, see Inherited Rights Filter (IRF).

Filtered read/write replicas contain a filtered set of objects or object classes along with a filtered set of attributes and values for those objects. The contents are limited to the types of eDirectory objects and properties specified in the host server's replication filter. Users can read and modify the contents of the replica, and eDirectory can access and change selected object information. The selected changes are then automatically propagated to all replicas. With filtered replicas, you can have only one filter per server. This means that any filter defined for a server applies to all filtered replicas on that server.
You can, however, have as many filtered replicas as you have servers to hold them, although more replicas cause more traffic to keep them synchronized. For more information, see Filtered Replicas.

Filtered read-only replicas contain a filtered set of objects or object classes along with a filtered set of attributes and values for those objects. They receive synchronization updates from master and read/write replicas but don’t receive changes directly from clients. Users can read but not modify the contents of the replica. The contents are limited to the types of eDirectory objects and properties specified in the host server's replication filter. For more information, see Filtered Replicas.

Subordinate reference replicas are system-generated replicas that don’t contain all the object data of a master or a read/write replica. Subordinate reference replicas, therefore, don’t provide fault tolerance. They are internal pointers that are generated to contain enough information for eDirectory to resolve object names across partition boundaries. You can’t delete a subordinate reference replica. eDirectory deletes it automatically when it is not needed. Subordinate reference replicas are created only on servers that hold a replica of a parent partition but no replicas of its child partitions. If a replica of the child partition is copied to a server holding the replica of the parent, the subordinate reference replica is automatically deleted.

Filtered replicas contain a filtered set of objects or object classes along with a filtered set of attributes and values for those objects. For example, you might want to create a set of filtered replicas on a single server that contains only User objects from various partitions in the eDirectory tree. In addition to this, you can choose to include only a subset of the User objects’ data (for example, Given Name, Surname, and Telephone Number). A filtered replica can construct a view of eDirectory data onto a single server.
To do this, filtered replicas let you create a scope and a filter. This results in an eDirectory server that can house a well-defined data set from many partitions in the tree. The descriptions of the server’s scope and data filters are stored in eDirectory and can be managed through the Server object in iManager.

A server hosting one or more filtered replicas has only a single replication filter. Therefore, all filtered replicas on the server contain the same subset of information from their respective partitions. The master partition replica of a filtered replica must be hosted on an eDirectory server running eDirectory 8.5 or later.

Filtered replicas can:

- Reduce synchronization traffic to the server by reducing the amount of data that must be replicated from other servers.
- Reduce the number of events that must be filtered by NetIQ Identity Manager. For more information on NetIQ Identity Manager, see the NetIQ Identity Manager 4.0.2 Administration Guide.
- Reduce the size of the directory database. Each replica adds to the size of the database. By creating a filtered replica that contains only specific classes (instead of creating a full replica), you can reduce the size of your local database. For example, if your tree contains 10,000 objects but only a small percentage of those objects are Users, you could create a filtered replica containing only the User objects instead of a full replica containing all 10,000 objects.

Other than the ability to filter data stored in a local database, the filtered replica is like a normal eDirectory replica and it can be changed back to a full replica at any time.

NOTE: Filtered replicas by default will have the Organization and the Organizational Unit as mandatory filters.

For more information on setting up and managing filtered replicas, see Section 6.6, Setting Up and Managing Filtered Replicas.
In addition to selecting the option in iManager, to allow local logins to a Filtered Replica you should also add the class ndsLoginProperties to the filter.
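The effect of a replication filter described above can be sketched as a toy model. This is purely illustrative Python, not the eDirectory API; the object classes and attribute names are examples drawn from the text.

```python
# Toy model of a replication filter: keep only selected object
# classes, and only selected attributes of the kept objects.
def apply_replication_filter(objects, classes, attributes):
    filtered = []
    for obj in objects:
        if obj["class"] not in classes:
            continue  # object class is outside the filter's scope
        kept = {k: v for k, v in obj["attrs"].items() if k in attributes}
        filtered.append({"class": obj["class"], "attrs": kept})
    return filtered

# A partition holding a mix of object classes (example data).
partition = [
    {"class": "User", "attrs": {"Given Name": "Ada", "Surname": "Lovelace",
                                "Telephone Number": "555-0100",
                                "Title": "Engineer"}},
    {"class": "Printer", "attrs": {"Location": "Floor 2"}},
]

# A filtered replica holding only User objects, with three attributes,
# mirroring the Given Name / Surname / Telephone Number example.
replica = apply_replication_filter(
    partition, classes={"User"},
    attributes={"Given Name", "Surname", "Telephone Number"})
```

Note that, as in eDirectory, the filter here is defined once and applied uniformly: every object that survives it carries the same restricted attribute set.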
<urn:uuid:d824869a-3374-4a87-990f-d0d971d96d32>
CC-MAIN-2017-04
https://www.netiq.com/documentation/edir88/edir88/data/fbaecheh.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00491-ip-10-171-10-70.ec2.internal.warc.gz
en
0.893253
2,017
3.53125
4
Jan. 22 — Scientists around the globe are using Cornell CatchAll software to perform more accurate statistical analyses in fields ranging from microbial ecology to viral metagenomics. Developed by Computer and Information Science professor John Bunge and Cornell Center for Advanced Computing database designers, CatchAll has become the standard package for population diversity analysis. In the January 2014 publication of Microbial Ecology, scientists report using CatchAll in the analysis of soil contaminated with heavy metals, a pervasive problem in the vicinity of mines and industrial facilities in Southern Poland. Little is known about most bacterial species thriving in such soils and even less about a core bacterial community. Marcin Golebiewski and colleagues at Nicolaus Copernicus University used 16S rDNA pyrosequencing and CatchAll to assess the influence of heavy metals on both bacterial diversity and community structure. It was found that zinc had the biggest impact in decreasing both diversity and species richness. Understanding biodiversity in polluted areas helps scientists to quantify the detrimental effects of human activity on particular taxonomic groups and to monitor bioremediation efforts. In another recent study published in Clinical and Vaccine Immunology, Patricia Diaz and colleagues at The University of Connecticut conducted the first comprehensive evaluation of long-term organ transplant immunosuppression on the oral bacterial microbiome. Many organ transplant patients require lifelong immunosuppression in order to prevent transplant rejection. This study found that prednisone had the most significant effect on bacterial diversity and on the colonization of potentially opportunistic pathogens. The researchers used CatchAll to calculate the number of observed operational taxonomic units (OTUs) and the number of estimated OTUs in order to determine species richness.
The latest version of CatchAll was updated in October 2013 and is available for download. In spring 2014 John Bunge, with Cornell Department of Statistical Sciences Ph.D. student Amy Willis, will release a new software package called breakaway. Written in R, breakaway implements a radical new statistical approach to diversity estimation based on a little-known thread in probability distribution theory, which exploits ratios of sample counts. A beta version of breakaway is currently available for testing by contacting the authors. Source: Cornell University Center for Advanced Computing
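Richness estimators of this kind work from the frequency counts of the observed species: how many species were seen once, twice, and so on. As a stand-in illustration of the general idea (this is not CatchAll's or breakaway's actual model — the classic bias-corrected Chao1 lower bound is used here instead), unseen richness can be estimated from the numbers of singleton and doubleton species:

```python
from collections import Counter

def chao1(abundances):
    """Bias-corrected Chao1 richness estimate from per-species
    abundance counts. The correction keeps the estimate defined
    even when no species was seen exactly twice (f2 == 0)."""
    freqs = Counter(abundances)      # abundance -> number of species
    s_obs = len(abundances)          # species actually observed
    f1, f2 = freqs[1], freqs[2]      # singletons and doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# Ten observed species; many singletons suggest more remain unseen.
estimate = chao1([1, 1, 1, 1, 2, 2, 3, 5, 8, 13])  # -> 12.0
```

When every species has been seen more than once (f1 == 0), the estimate collapses to the observed count, reflecting that the sample shows no evidence of unseen species.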
<urn:uuid:16e40b05-e319-49df-a8c2-1ad1384c77aa>
CC-MAIN-2017-04
https://www.hpcwire.com/off-the-wire/scientists-utilizing-cornell-software/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00307-ip-10-171-10-70.ec2.internal.warc.gz
en
0.908069
443
3.0625
3
Wiedner E.B., Ringling Brothers and Barnum and Bailey Center for Elephant Conservation | Peddie J., Drs. Peddie | Peddie L.R., Drs. Peddie | Abou-Madi N., Cornell University | And 11 more authors.
Journal of Zoo and Wildlife Medicine | Year: 2012

Three captive-born (5-day-old, 8-day-old, and 4-yr-old) Asian elephants (Elephas maximus) and one captive-born 22-yr-old African elephant (Loxodonta africana) from three private elephant facilities and one zoo in the United States presented with depression, anorexia, and tachycardia as well as gastrointestinal signs of disease including abdominal distention, decreased borborygmi, tenesmus, hematochezia, or diarrhea. All elephants showed some evidence of discomfort including agitation, vocalization, or postural changes. One animal had abnormal rectal findings. Nonmotile bowel loops were seen on transabdominal ultrasound in another case. Duration of signs ranged from 6 to 36 hr. All elephants received analgesics and were given oral or rectal fluids. Other treatments included warm-water enemas or walking. One elephant underwent exploratory celiotomy. Three animals died, and the elephant taken to surgery was euthanized prior to anesthetic recovery. At necropsy, all animals had severe, strangulating intestinal lesions. Copyright © 2012 by American Association of Zoo Veterinarians.

Wong A.W., University of Florida | Wong A.W., Florida Fish And Wildlife Conservation Commission | Wong A.W., University of Queensland | Bonde R.K., University of Florida | And 9 more authors.
Aquatic Mammals | Year: 2012

West Indian manatees (Trichechus manatus) are captured, handled, and transported to facilitate conservation, research, and rehabilitation efforts. Monitoring manatee oral temperature (OT), heart rate (HR), and respiration rate (RR) during out-of-water handling can assist efforts to maintain animal well-being and improve medical response to evidence of declining health. To determine effects of capture on manatee vital signs, we monitored OT, HR, and RR continuously for a 50-min period in 38 healthy, awake, juvenile and adult Florida manatees (T. m. latirostris) and 48 similar Antillean manatees (T. m. manatus). We examined creatine kinase (CK), potassium (K+), serum amyloid A (SAA), and lactate values for each animal to assess possible systemic inflammation and muscular trauma. OT range was 29.5 to 36.2° C, HR range was 32 to 88 beats/min, and RR range was 0 to 17 breaths/5 min. Antillean manatees had higher initial OT, HR, and RR than Florida manatees (p < 0.001). As monitoring time progressed, mean differences between the subspecies were no longer significant. High RR over monitoring time was associated with high lactate concentration. Antillean manatees had higher overall lactate values ([mean ± SD] 20.6 ± 7.8 mmol/L) than Florida manatees (13.7 ± 6.7 mmol/L; p < 0.001). We recommend monitoring manatee OT, HR, and RR during capture and handling in the field or in a captive care setting.

Miller M., Disneys Animal Programs and Environmental Initiatives | Weber M., Disneys Animal Programs and Environmental Initiatives | Valdes E.V., Disneys Animal Programs and Environmental Initiatives | Neiffer D., Disneys Animal Programs and Environmental Initiatives | And 3 more authors.
Journal of Zoo and Wildlife Medicine | Year: 2010

A combination of low serum calcium (Ca), high serum phosphorus (P), and low serum magnesium (Mg) has been observed in individual captive ruminants, primarily affecting kudu (Tragelaphus strepsiceros), eland (Taurotragus oryx), nyala (Tragelaphus angasii), bongo (Tragelaphus eurycerus), and giraffe (Giraffa camelopardalis). These mineral abnormalities have been associated with chronic laminitis, acute tetany, seizures, and death. Underlying rumen disease secondary to feeding highly fermentable carbohydrates was suspected to be contributing to the mineral deficiencies, and diet changes that decreased the amount of starch fed were implemented in 2003. Serum chemistry values from before and after the diet change were compared. The most notable improvement after the diet change was a decrease in mean serum P. Statistically significant decreases in mean serum P were observed for the kudu (102.1-66.4 ppm), eland (73.3-58.4 ppm), and bongo (92.1-64.2 ppm; P < 0.05). Although not statistically significant, mean serum P levels also decreased for nyala (99.3-86.8 ppm) and giraffe (82.6-68.7 ppm). Significant increases in mean serum Mg were also observed for kudu (15.9-17.9 ppm) and eland (17.1-19.7 ppm). A trend toward increased serum Mg was also observed in nyala, bongo, and giraffe after the diet change. No significant changes in mean serum Ca were observed in any of the five species evaluated, and Ca was within normal ranges for domestic ruminants. The mean Ca:P ratio increased to greater than one in every species after the diet change, with kudu, eland, and bongo showing a statistically significant change. The results of this study indicate that the diet change had a generally positive effect on serum P and Mg levels. Copyright 2010 by American Association of Zoo Veterinarians.
<urn:uuid:2e35398b-9d20-4b81-81d3-620a689e89e5>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/disneys-animal-programs-and-environmental-initiatives-730340/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00125-ip-10-171-10-70.ec2.internal.warc.gz
en
0.907198
1,262
2.546875
3
According to the United Nations, the world's population is expected to grow to as many as 11 billion people by 2050 -- more than half of those people will reside in cities versus the countryside. To properly support rapidly-growing, densely-populated urban areas, technological advancement will be paramount, according to a recent editorial in InformationWeek, whose Future Cities Survey found that the U.S.'s major urban centers have a long way to go when it comes to technology. In the context of preparing for future population growth, only 7 percent of the 198 municipal IT professionals surveyed from major U.S. cities reported their city IT strategy as “progressive and well conceived.” Conversely, 38 percent of those surveyed described their city's IT plan as “poor or nonexistent.” Cities of the future, according to the editorial, will need an interconnected and sophisticated infrastructure that includes the city's buildings, roadways, rail systems, electric grid and water facilities to effectively serve the populace. The most prominent obstacle cited by survey takers was money, with 88 percent citing lack of funding as the primary obstacle to technological progress in their city. And mayors and other city officials need private-sector help to advance. According to the survey, 66 percent of respondents cited public-private collaboration as what should lead Future Cities efforts. Improving K-12 education, expanding access to wireless and broadband networks and ensuring the cybersecurity of critical infrastructure were listed as most promising areas for such collaboration. Conceiving a smarter city that is prepared for unrelenting population growth will be a matter of developing many different technologies in parallel, according to the editorial. Communications infrastructure, mobile device support, transportation systems, public safety and crime prevention investments, and surveillance technology all play a role in the future of large urban areas.
<urn:uuid:02fd7da5-00c9-493a-895a-d3fd3a1a5687>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Study-City-Planners-Unprepared-for-Future.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00243-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953186
366
2.984375
3
Happy Earth Day! As we take today to reflect on environmental impact, let’s take a look at some solid data to see how much energy and carbon dioxide emissions are really saved by data center energy efficiency and renewable energy use. The recent memo from the Green Grid reintroducing their metrics for data center efficiency provides a great jumping off point to estimate the environmental impact of an average data center. With recent headlines like Apple’s shift towards renewables and Google’s funding of wind farms, not to mention our own efforts at Green House Data to improve efficiency and overall Power Usage Effectiveness (PUE), many in the data center industry are curious about the actual data at hand. If a company improves 2% of its overall carbon footprint through efficient or renewably powered data centers, what does that actually mean? Is it a large impact or just a PR opportunity? Summary of Data Center Emission Data and Method We took a theoretical 10 MW facility and assumed it was operating at capacity for simplicity of math and comparison. We measured this facility’s emissions at 1.8 and 1.2 PUE to see how improving operations would affect emissions. Using the Carbon Usage Effectiveness (CUE) metric, which is a combination of PUE and the Carbon Dioxide Emissions Factor (CEF) of a location, we discovered: eGrid Subregions – Find Your Carbon Usage Effectiveness The United States electricity grid is divided into various subregions called eGrid regions (Emissions & Generation Resource Integrated Database). This database tracks air emissions rates, net generation, resource mix and more. The EPA provides this information and also the geographical division of eGrids, which we subsequently placed on a Google Map: If you select the region of your data center facility location, you can see the Greenhouse Gas Emissions Factor and CEF to help calculate your own CUE (Carbon Usage Effectiveness). 
CEF is the Carbon Dioxide Emission Factor, measured in kilograms of CO2 emitted per kilowatt-hour (kgCO2eq/kWh), as described by the Green Grid. The EPA reports the Greenhouse Gas Emissions Factor (GGEF) of each eGrid subregion. The formula for CEF is as follows: CEF = (Greenhouse Gas Emissions Factor / 0.293) / 1000 The Greenhouse Gas Emissions Factor is measured in kg CO2 emitted for each MBtu, so dividing it by 0.293 (since 1 BTU = 0.293 Wh, 1 MBtu = 0.293 MWh) gives the amount of CO2 emitted in kg per megawatt-hour. Dividing that amount by 1000 provides the CEF in kgCO2e/kWh. As an example, the Rocky Mountain eGrid is RMPA and has a GGEF of 254.6387. Any facility located in the RMPA grid region has the following CEF: 254.6387 kgCO2e/MBtu / 0.293 = 869.07 kgCO2e/MWh = 0.86907 kgCO2e/kWh The Annual Emissions of a 10 MW Data Center To find the total emissions in kg of a 10 megawatt data center facility, we first have to calculate the Carbon Usage Effectiveness (CUE) at both 1.8 and 1.2 PUE. The CEF is multiplied by PUE to get CUE, which is multiplied by annual energy use in kWh to find the total emissions of a data center in kg of CO2. CEF * PUE = CUE CUE * Total Annual Energy Draw = Annual Emissions For our 10 MW facility, if it were in the RMPA grid region, the calculation would go as follows: PUE (10 MW Facility in RMPA Grid Region) Carbon Usage Effectiveness Annual Emissions (kg) 0.86907 kgCO2e/kWh * 1.8 PUE = 1.564 CUE (1.564 CUE * 8,765.81 hours per year) * 10,000 kW = 137,126,485.77 kg annual emissions 0.86907 kgCO2e/kWh * 1.2 PUE = 1.043 CUE (1.043 CUE * 8,765.81 hours) * 10,000 kW = 91,417,657.18 kg annual emissions The chart below shows the annual emissions of a 10 MW facility in each subregion at 1.8 and 1.2 PUE. Although PUE is a variable metric and has been accused of manipulation in the past, if we assume a legitimate measurement, lowering PUE from 1.8 to 1.2 can result in millions of pounds of CO2 saved from the atmosphere. 
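Spelled out in code, the arithmetic above looks like this. This is a sketch only: the constants (0.293, the RMPA GGEF, the hours per year) are the ones the article itself uses, while the function names are invented here for illustration.

```python
# Reproduce the article's RMPA worked example:
#   CEF = (GGEF / 0.293) / 1000
#   CUE = CEF * PUE
#   annual emissions = CUE * hours per year * facility load in kW

HOURS_PER_YEAR = 8765.81   # average hours in a year, as used in the article
BTU_IN_WH = 0.293          # 1 BTU = 0.293 Wh, so 1 MBtu = 0.293 MWh

def cef(ggef_kg_per_mbtu: float) -> float:
    """Carbon Dioxide Emission Factor in kgCO2e per kWh."""
    kg_per_mwh = ggef_kg_per_mbtu / BTU_IN_WH   # kgCO2e/MBtu -> kgCO2e/MWh
    return kg_per_mwh / 1000.0                  # kgCO2e/MWh  -> kgCO2e/kWh

def annual_emissions_kg(ggef: float, pue: float, load_kw: float = 10_000) -> float:
    """Annual CO2e emissions (kg) of a facility running at `load_kw` capacity."""
    cue = cef(ggef) * pue
    return cue * HOURS_PER_YEAR * load_kw

rmpa_ggef = 254.6387  # kgCO2e/MBtu for the RMPA subregion (from the article)
for pue in (1.8, 1.2):
    print(f"PUE {pue}: {annual_emissions_kg(rmpa_ggef, pue):,.0f} kg CO2e/yr")
```

Running this reproduces the 137M kg (at 1.8 PUE) and 91M kg (at 1.2 PUE) figures quoted for the 10 MW RMPA facility.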
This chart shows the amount of CO2 saved in each grid region (calculated as [Emissions at 1.8 – emissions at 1.2] * 2.20462 lbs/kg). Once again, this is for a 10 MW facility. The data points to two interesting conclusions, only one of which is really controllable by data center operators: (1) lowering PUE delivers dramatic reductions in carbon footprint and (2) the electrical grid region will significantly impact the emissions level of data centers. For many companies including Green House Data and our favorite headline-grabbers like Google and Facebook, another weapon in the fight against emissions is renewable energy sources. The big companies are constructing their own renewable generation or investing in large scale privately owned wind farms and solar fields, removing themselves from the grid subregions entirely. Other companies purchase Renewable Energy Credits, meaning they are still impacted by the efficiency of their local, dirty grid; but at least are making an investment to reduce the ultimate carbon footprint of data center operations. In either case, it will be interesting to see if data center operators large and small begin to measure their Carbon Usage Effectiveness and total emissions on a yearly basis. Of course, these calculations are for regular operation pulling off the standard grid only and do not take into account factors like diesel generators, office supplies, executive travel, etc. But the above charts, maps, and formulas can at least help get operators on their way to measuring carbon footprint of data centers. Posted By: Joe Kozlowicz
<urn:uuid:737ee82e-1d34-4467-93a0-034f6ce336d3>
CC-MAIN-2017-04
https://www.greenhousedata.com/blog/the-truth-about-data-center-carbon-emissions-and-pue
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00271-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91264
1,307
2.953125
3
I think it’s fair to say that Google has done some pretty innovative things. They seem to be a smart and focused bunch of people. Not surprisingly, companies with talented, focused people can be quite innovative. When you throw billions of dollars of funding behind them, the results can be pretty impressive. Recently, Google’s use of recycled water as a part of their data center operations created a lot of press. A blog from their Facilities Manager, Jim Brown, revealed that Google was using recycled water to cool 100% their data center here in Georgia. The use of recycled water for cooling is pretty smart. Data centers are huge consumers of resources. Unless they are built and managed intelligently, they can have significant environmental impacts. Using recycled water allows a data center provider or colocation facility to avoid placing additional strain on the local water supply because recycled water has already been used once and has not yet been returned to the environment. According to the blog, Google intercepts recycled water from the local water authority, treats it further and uses it for heat exchange in their data center cooling. A portion of the water that is used evaporates. The rest of the water is treated again and returned to the environment in a clean, clear and safe form. Google has done something truly impressive here, both by providing a clever solution that addresses an environmental concern associated with their data centers and increasing the awareness of intelligent options to lessen the environmental impact of data center operations. And of course, it’s nice to see another group of smart, focused data center individuals coming to the same conclusion we did around recycled water use. Internap has been using recycled water to cool its data center in Santa Clara, CA since it opened in 2010. While we knew it was making a difference for us, the third party validation from Google sure feels good. What other green practices are you interested in? 
Visit our SlideShare page and download our presentation from the recent Green Data Center Conference in Dallas to learn more.
<urn:uuid:1d0309b8-3899-4702-b969-385276f96750>
CC-MAIN-2017-04
http://www.internap.com/2012/04/12/recycled-water-good-for-google-too/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00051-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961337
406
2.828125
3
2.4.6 What are some techniques against hash functions? The essential cryptographic properties of a hash function are that it is both one-way and collision-free (see Question 2.1.6). The most basic attack we might mount on a hash function is to choose inputs to the hash function at random until either we find some input that will give us the target output value we are looking for (thereby contradicting the one-way property), or we find two inputs that produce the same output (thereby contradicting the collision-free property). Suppose the hash function produces an n-bit long output. If we are trying to find some input that will produce a given target output value y, then since each output is equally likely we expect to have to try on the order of 2^n possible input values. A birthday attack is a name used to refer to a class of brute-force attacks. If some function, when supplied with a random input, returns one of k equally-likely values, then by repeatedly evaluating the function for different inputs, we expect to obtain the same output after about 1.2k^(1/2) trials. If we are trying to find a collision, then by the birthday paradox we would expect that after trying 1.2(2^(n/2)) possible input values we would have some collision. Van Oorschot and Wiener [VW94] showed how such a brute-force attack might be implemented. With regard to the use of hash functions in the provision of digital signatures, Yuval [Yuv79] proposed the following strategy based on the birthday paradox, where n is the length of the message digest: - The adversary selects two messages: the target message to be signed and an innocuous message that Alice is likely to want to sign. - The adversary generates 2^(n/2) variations of the innocuous message (by making, for instance, minor editorial changes), all of which convey the same meaning, and their corresponding message digests. He then generates an equal number of variations of the target message to be substituted. 
- The probability that one of the variations of the innocuous message will match one of the variations of the target message is greater than 1/2 according to the birthday paradox. - The adversary then obtains Alice's signature on the variation of the innocuous message. - The signature from the innocuous message is removed and attached to the variation of the target message that generates the same message digest. The adversary has successfully forged the message without discovering the enciphering key. Pseudo-collisions are collisions for the compression function (see Question 2.1.6) that lies at the heart of an iterative hash function. While collisions for the compression function of a hash function might be useful in constructing collisions for the hash function itself, this is not normally the case. While pseudo-collisions might be viewed as an unfortunate property of a hash function, a pseudo-collision is not equivalent to a collision - the hash function may still be considered as reasonably secure, though its use for new applications tends to be discouraged in favor of pseudo-collision-free hash functions. MD5 (see Question 3.6.6) is one such example.
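The birthday-paradox estimate of about 1.2√k trials is easy to check empirically. The sketch below is illustrative only: it builds a toy n-bit hash by truncating SHA-256 (it does not attack any real digest) and counts how many random inputs are needed before two of them collide.

```python
import hashlib
import os

def truncated_digest(data: bytes, n_bits: int) -> int:
    """Toy n-bit hash: the first n bits of SHA-256."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return h >> (256 - n_bits)

def trials_until_collision(n_bits: int) -> int:
    """Draw random messages until two distinct ones share a digest."""
    seen = {}
    trials = 0
    while True:
        msg = os.urandom(16)
        trials += 1
        d = truncated_digest(msg, n_bits)
        if d in seen and seen[d] != msg:
            return trials
        seen[d] = msg

# For n = 20 bits, k = 2**20, so theory predicts roughly
# 1.2 * 2**(20/2) ~= 1229 trials on average.
n = 20
runs = [trials_until_collision(n) for _ in range(20)]
print(sum(runs) / len(runs))
```

The averaged trial count lands near 2^(n/2), not 2^n, which is exactly why a digest must be long enough that even 2^(n/2) work is infeasible.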
<urn:uuid:9be18f9c-0803-48c5-aa5f-d62efe067335>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/techniques-against-hash-functions.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00445-ip-10-171-10-70.ec2.internal.warc.gz
en
0.879925
660
3.421875
3
New protocol gives serious speed to Web apps There are security concerns, but WebSockets can significantly ramp up Web applications - By Dan Rowinski - Mar 29, 2011 Who wouldn't want to speed up Web applications? IT administrators would jump at the chance to dramatically reduce data loads and latency times between clients and servers, and users wouldn't complain about faster applications and better performance. The technology is available. It is just not quite mature. WebSockets, a protocol within HTML5, functions by keeping a single Transmission Control Protocol port always open for bi-directional data transfers between a client and a server. That would enable faster and easier communication for Web applications than Hypertext Transfer Protocol (HTTP), which closes and reopens the port between client and server every time a communication is made. “What they're hoping to do with WebSocket is use Port 80 for bi-direction, full-duplex communications between a Web browser and a server,” said Tom Bridge, a partner at Technolutionary, an IT solutions company. “That means that fewer ports will have to be exposed, and you can do a lot more data interaction over Port 80.” The system is not without its drawbacks, especially coming from an IT perspective. With WebSockets, Web applications function within a browser and provide a universal platform to couch communication with a server. In the HTTP environment, that communication can be very secure, as system administrators can control access to various applications and servers by limiting access to ports in the environment, like a dyke system in an irrigated field. Because WebSockets keeps the ports always open, there is less control by the systems administrator on what comes and goes, and what type of possibly malicious caches are hidden between the client and the server. 
“Many businesses block outbound traffic on non-standard ports, as a security (data loss) management technique, and this would permit a browser to act as intermediary, which means that blocking certain Web applications will no longer be possible at the port level, since WebSockets routes all traffic through the single open Web port,” Bridge said. Think about it in terms of the irrigated field. An always-open dyke creates a flood of water, which could be a good thing for the crops. It also creates more of an opportunity for someone to sit on the edge of the stream and siphon off water (data) or poison it (malicious caches or applications) between the source and the destination. “You're not going to close Port 80 unless you're ready to cut your employees off from the Web entirely,” Bridge said. “Since WebSockets supports TLS (Transport Layer Security), it's also entirely possible to hide what that stream is doing, which tends to make some of the security guys pretty nervous since you can't see what it's doing or where it's going.” The major browsers all have the capability to support WebSockets protocols. But Firefox and Opera have disabled WebSockets until the vulnerability can be patched. Google’s Chrome and Apple Safari (and iOS 4.2.1 mobile Safari) support it, and Microsoft’s Internet Explorer provides functionality as an add-on. The protocol supports proxy servers. WebSockets is well on its way to becoming certified by the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF), which would give the protocols – ws:// for unencrypted and wss:// (WebSocket Secure) for encrypted – credence as tools to be developed upon. In an article at Technology Review, a magazine produced by the Massachusetts Institute of Technology, Ian Hickson, who's in charge of HTML5 specifications for Google, said that the protocol is very interesting for the company, which is obsessed with the speed of the Web. “Reducing kilobytes of data to 2 bytes ... 
and reducing latency from 150 milliseconds to 50 milliseconds is far more than marginal,” Hickson told Technology Review. “In fact, these two factors alone are enough to make WebSockets seriously interesting to Google.” Google is a huge proponent of HTML5 and has a vested interest in making the protocol and all its functions, like WebSockets, a ubiquitous part of using the Internet. One company using WebSockets is a startup in Mountain View, Calif., called Kaazing, according to Technology Review. The company’s early customers are gambling companies for whom saving milliseconds on communications could be worth significant money. There is a clear use-case as well in financial markets, where speed is important. There are still pitfalls, but WebSockets could be an Internet-changing innovation. “Since it will work anywhere, WebSockets could be a game-changer,” Bridge said.
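The ws:// upgrade the article describes begins life as an ordinary HTTP request on port 80, which is why it slips through firewalls that only allow Web traffic. The server proves it understood the upgrade by hashing the client's key with a fixed GUID. A minimal sketch of that server-side computation (following RFC 6455, which standardized the handshake after this article was written):

```python
import base64
import hashlib

# Per RFC 6455: the server concatenates the client's Sec-WebSocket-Key with
# this fixed GUID, SHA-1 hashes the result, and returns the base64 encoding
# in the Sec-WebSocket-Accept header. After that, the same TCP connection on
# port 80 stays open for full-duplex framed traffic.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value for a handshake response."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# The example key/accept pair given in RFC 6455 itself:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Because the whole exchange looks like one HTTP request and response, port-level filtering cannot distinguish it from ordinary Web browsing, which is exactly the control problem the article's security sources worry about.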
<urn:uuid:3a1f0af1-3d51-4b96-86c3-6b8b2c4fb74b>
CC-MAIN-2017-04
https://gcn.com/articles/2011/03/29/websockets-protocol-speeds-web-apps.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00353-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933458
1,014
2.5625
3
Black Box Explains...Advanced printer switches Matrix—A matrix switch is a switch with a keypad for selecting one of many input ports to connect to any one of many output ports. Port-Contention—A port-contention switch is an automatic electronic switch that can be serial or parallel. It has multiple input ports but only one output port. The switch monitors all ports simultaneously. When a port receives data, it prints and all the other ports have to wait. Scanning—A scanning switch is like a port-contention switch, but it scans ports one at a time to find one that’s sending data. Code-Operated—Code-operated switches receive a code (data string) from a PC or terminal to select a port. Matrix Code-Operated—This matrix version of the code-operated switch can be an any-port to any-port switch. This means that any port on the switch can attach to any other port or any two or more ports can make a simultaneous link and transfer data.
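As a toy illustration of the last category, a matrix code-operated switch can be modeled as a routing table that accepts code strings and allows several simultaneous input-to-output links. The class, method names, and code format here are all invented for the sketch, not Black Box's:

```python
class MatrixSwitch:
    """Toy model of a matrix code-operated switch.

    Any input port can be linked to any output port, and several links can
    be active at once -- the defining property of the matrix variant.
    """

    def __init__(self) -> None:
        self.links: dict[int, int] = {}  # input port -> output port

    def select(self, code: str) -> None:
        """Interpret a code string like '3>7' as 'connect port 3 to port 7'."""
        src, dst = (int(p) for p in code.split(">"))
        self.links[src] = dst

    def send(self, port: int, data: str) -> tuple[int, str]:
        """Deliver data from an input port to its currently linked output."""
        if port not in self.links:
            raise RuntimeError(f"port {port} is not connected")
        return (self.links[port], data)

sw = MatrixSwitch()
sw.select("3>7")  # code-operated: a data string selects the route
sw.select("1>2")  # a second simultaneous link -- a matrix, not contention
print(sw.send(3, "print job"))
```

A plain code-operated switch would keep only one active link at a time; the matrix version keeps a whole table, which is the difference the glossary entry describes.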
<urn:uuid:9108a110-c315-4778-93ee-723598025ba2>
CC-MAIN-2017-04
https://www.blackbox.com/en-ca/products/black-box-explains/black-box-explains-advanced-printer-switches
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00565-ip-10-171-10-70.ec2.internal.warc.gz
en
0.867236
215
2.65625
3
LONGUEUIL, QUEBEC--(Marketwired - April 30, 2014) - Canadian Space Agency After more than ten years of studying the Universe, the Canadian Microvariability and Oscillation of STars (MOST) mission will come to an end on September 9, 2014, having exceeded its objectives. Since its launch in 2003, MOST has produced over one hundred science publications and provided astronomers with new insights into the behaviour of stars. Originally planned as a one-year project, MOST was extended annually due to the telescope's continued successes. The suitcase-sized telescope will leave a prolific legacy of data for astronomers to analyze. In the fall of 2013, the Canadian Space Agency (CSA) conducted a mission extension review in cooperation with members of Canada's astronomy community. The evaluation weighed the mission's ongoing operational costs against its objectives and new alternatives to obtain similar data. The review led to the recommendation that the mission be terminated, considering that MOST had already surpassed its objectives. MOST has helped a new generation of astronomers and space engineers advance their studies and research. Under the leadership of its Principal Investigator, Dr Jaymie Matthews of the University of British Columbia, the MOST science team currently includes members from: the University of British Columbia, the University of Toronto, Université de Montréal, St-Mary's University, the University of Vienna, Harvard University and NASA's Ames Research Center. • The space telescope will complete its current list of planned observation targets by September 9, 2014. • The CSA is working with the University of British Columbia, the National Research Council Canada, and the Université de Montréal to archive MOST's data at the Canadian Astronomy Data Center, thus making the mission's data available to the world's astronomy community for future use. 
• One of MOST's key findings is the surprising discovery that the star Procyon, the eighth brightest in the night sky, does not oscillate or vibrate (in astronomical terms, it has no pulse), challenging researchers' understanding of the life cycle of stars. • In 2011, MOST confirmed the existence of a suspected (and rather odd) exo-planet around the star 55 Cancri. This planet, called 55 Cancri e, is very close to the star, orbiting it in only 17 hours. The planet is about eight times denser than Earth, making it one of the densest exo-planets known. • Under the "My Own Space Telescope" public contest, the MOST mission offered amateur astronomers opportunities to observe targets of their choice. Canadian stargazers selected the red supergiant star Betelgeuse in the constellation of Orion and looked at matter around a quasar, which is a star-like object from outside our galaxy that emits large amounts of energy. • The prime contractor for MOST's satellite and ground station operations is Mississauga-based Microsat Systems Canada Inc. (MSCI). • Key stakeholders and partners included: the University of British Columbia, the University of Toronto Institute for Aerospace Studies' Space Flight Laboratory, as well as the Centre for Research in Earth and Space Technology (CRESTech) of Toronto, the Radio Amateur Satellite Corporation (AMSAT), which includes both Canadian and U.S. chapters, AeroAstro, Inc. of Ashburn, Virginia, Spectral Applied Research, Routes AstroEngineering, the Royal Astronomical Society of Canada (RASC), and Sumus Technologies. "Thanks to the Canadian Microvariability and Oscillation of STars telescope, Canadian astronomers have produced a decade's worth of astounding discoveries and Canada's space industry gained essential expertise. As MOST prepares for its retirement, I offer my congratulations to the talented team of astronomers and engineers on this Canadian science and technology success story." 
- General (Retired) Walter Natynczyk, CSA President Follow us on:
<urn:uuid:b01b2e76-ae5a-4bea-ab4f-65f25f4d4f66>
CC-MAIN-2017-04
http://www.marketwired.com/press-release/canadas-most-astronomy-mission-comes-to-an-end-1905015.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00501-ip-10-171-10-70.ec2.internal.warc.gz
en
0.896371
806
2.59375
3
Libelium recently launched its vehicle traffic monitoring platform that enables system integrators to create a real-time system that monitors vehicular and pedestrian traffic in cities and buildings. This new system uses Bluetooth and ZigBee double radio features to link up vehicles and people by Bluetooth devices in a given street, footpath, or roadway. The sensor data is then transferred by a multi-hop ZigBee radio, via an internet gateway, to a server. These types of traffic monitoring systems offer significant benefits: real-time traffic warnings can reduce road congestion, cutting journey times and wasteful emissions. Additionally, these systems could be introduced into other operations, such as shopping centers, airports, and sports stadiums. The sensor data could be used to assess the suitability of emergency evacuation plans or identify "hot" pedestrian routes for marketing and advertising purposes. Other companies, such as TraffiCast, are also offering traffic monitoring systems for travel time and average road speed information via their BlueToad technology. Again, the systems use media access control (MAC) addresses to detect mobile devices in vehicles such as mobile phones, headsets, and music players via Bluetooth technologies. The sensors are able to detect devices at distances of up to 50 meters, which more than covers a six-lane roadway. Smart transportation has been a key agenda in some cities and countries for a while. In Japan, police are mandated by the government to provide traffic information, and vehicle information and communication systems (VICS) have been in place since 1996. The traffic data is reported to the Japan Road Traffic Information Center (JARTIC), processed by Japan's VICS Center and broadcast by radio beacons on expressways, infrared beacons on major city roads and highways, and FM multiplex transmitting via existing FM channels.
It is estimated the world traffic system market was worth almost $1.3 billion in 2011 and the market is projected to increase steadily over the next five years. ABI Research's latest reports on Smart Cities and Traffic Information Systems, provide addition detail.
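The travel-time measurement these Bluetooth-based systems perform boils down to matching device identifiers seen at two roadside sensors and differencing the timestamps. A sketch of that matching step, with made-up data (sensor names, hashed MACs, and timestamps are all illustrative, and real deployments hash MACs before storage so individual devices stay anonymous):

```python
from statistics import mean

# Sightings as {hashed MAC: unix timestamp} at two sensors along one road link.
sensor_a = {"mac1": 1000.0, "mac2": 1010.0, "mac3": 1020.0}
sensor_b = {"mac1": 1090.0, "mac3": 1140.0, "mac9": 1200.0}

def travel_times(upstream: dict, downstream: dict) -> dict:
    """Match hashed MACs seen at both sensors; return per-device travel times."""
    return {m: downstream[m] - upstream[m] for m in upstream if m in downstream}

times = travel_times(sensor_a, sensor_b)
print(times)                  # {'mac1': 90.0, 'mac3': 120.0}
print(mean(times.values()))   # average link travel time in seconds -> 105.0
```

Devices seen at only one sensor (here mac2 and mac9) simply drop out of the match, so the estimate uses only vehicles that actually traversed the link.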
<urn:uuid:947c7fa4-c2f1-47c0-8515-2fe18cf6b759>
CC-MAIN-2017-04
https://www.abiresearch.com/blogs/libeliums-vehicle-traffic-monitoring-platform/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00317-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944929
406
2.515625
3
How To Secure Online Activities The Internet is not automatically a secure or safe place to be. It is important to be clear and distinct when discussing security. Security is not a singular concept, solution, or state. It is a combination of numerous aspects, implementations, and perspectives. In fact, security is usually a relative term with graded levels, rather than an end state that can be successfully achieved. In other words, a system is not secure; it is always in a state of being secured. There are no systems that cannot be compromised. However, if one system's security is more daunting to overcome than another's, then attackers might focus on the system that is easier to compromise.
<urn:uuid:408d7739-9f67-4892-835a-a57e6c0420eb>
CC-MAIN-2017-04
http://www.bitpipe.com/detail/RES/1377097047_848.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00041-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95074
145
3.296875
3
By Vivek Gautam, Sr. Research Analyst, South Asia & Middle East, Environmental & Building Technologies Practice Scientific management of solid waste is a grave challenge faced by most modern societies. In the gulf region, where most countries have highest per capita waste generation across the world, the scale of the challenge faced by civic authorities is even bigger. Fast-paced industrial growth, recent construction boom, increasing population & rapid urbanization, and vastly improved lifestyle & unsustainable consumption pattern have all contributed to this burgeoning waste problem. Preliminary estimates put the total volume of solid waste generated in the GCC region at around 120 million tons per year. A huge proportion of this is expected to be the waste generated from construction and demolition activities; municipal waste is the second largest waste category by source. In December 1997, GCC countries adopted a uniform waste management system and a monitoring mechanism for waste production, collection, sorting, treatment and disposal. Most of the waste management regulations and strategies adopted are based on universally accepted scientific approach enumerated in Integrated Waste Management Hierarchy. However, the hurdle lies in effective implementation. A look at the composition of Municipal Solid Waste in these countries suggests that it is largely decomposable and recyclable. However, at present waste disposal into landfills remains the widely practiced method. In countries such as Kuwait and Bahrain where limited land is available, this doesn’t seem to be most prudent option. There is need to encourage composting, recycling and incineration of waste in the region. Also the pace of waste management infrastructure development has been lagging the rate at which per capita waste generation has gone up.
<urn:uuid:d347854a-370e-4eb2-b2ab-ee305cf675d9>
CC-MAIN-2017-04
https://www.frost.com/sublib/display-market-insight.do?id=186566927
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00041-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924432
329
3.3125
3
Singh G.,Wildlife Institute of India | Rawat M.S.,National Medicinal Plants Board | Pandey D.,DFO | Rawat G.S.,Wildlife Institute of India Medicinal Plants | Year: 2011 The medicinal plants in traditional healthcare practices are providing clues to new areas of research and are well recognized in biodiversity conservation. Traditional knowledge has been the driving force for many basic scientific developments. However, the information on the uses of various plants for medicine is lacking from many interior areas of western Himalaya. Keeping this in view, a survey was conducted to explore the diversity of medicinal plants, their status in the wild and uses by the local communities for curing various ailments, situated in the fringes of Kedarnath Wildlife Sanctuary, Uttarakhand. Study revealed that more than 46 plant species out of 137 species of medicinal values recorded from the region are commonly used by the local people for their traditional health care system viz., skin diseases, dysentery, cough, fever, wounds, female disorders, joint pain, gastric problems, nasal bleeding, cold, piles, anti poison, ear problems, eye problems, stones and rheumatism. Source Peck M.A.,University of Hamburg | Neuenfeldt S.,Technical University of Denmark | Essington T.E.,University of Washington | Trenkel V.M.,French Research Institute for Exploitation of the Sea | And 9 more authors. ICES Journal of Marine Science | Year: 2014 Forage fish (FF) have a unique position within marine foodwebs and the development of sustainable harvest strategies for FF will be a critical step in advancing and implementing the broader, ecosystem-based management of marine systems. 
In all, 70 scientists from 16 nations gathered for a symposium on 12-14 November 2012 that was designed to address three key questions regarding the effective management of FF and their ecosystems: (i) how do environmental factors and predator-prey interactions drive the productivity and distribution of FF stocks across ecosystems worldwide, (ii) what are the economic and ecological costs and benefits of different FF management strategies, and (iii) do commonalities exist across ecosystems in terms of the effective management of FF exploitation? © 2013 International Council for the Exploration of the Sea. Published by Oxford University Press. All rights reserved. Source News Article | March 25, 2016 The environmental community has been watching Justin Trudeau's Liberals closely, to see how they live up to their promise to give Canada a low-carbon, climate-resistant economy. The new government's performance at COP 21 was nothing less than stellar. While the Federal government's meeting with the provinces in Vancouver failed to achieve much beyond an agreement that carbon will be priced, the herd is now moving. News from the environmental assessment front is less encouraging: the National Energy Board's flawed Trans Mountain Pipeline Expansion hearings are continuing and Catherine McKenna appears to have just rubber-stamped the Woodfibre LNG project. So what does Canada's budget say about the environment?
Erin Flanagan, of the Pembina Institute, described the allocations to the National Energy Board and Canadian Environmental Assessment Agency (CEAA) as "something to be optimistic about." "That's really important because we need our regulators to have the financial means to be able to interact directly with Canadians who have an opinion on projects and make sure they are tracking these project proponents and living up to the environmental assessment standards and environmental laws of the country." "I think this government has heard the message that the existing regulatory structures are not serving the public interest and so they have injected some money into both of those structures so they can do their jobs," she said. Elizabeth May pointed out that this is not enough. The CEAA's $14.2 million allotment is to carry out the directives that former Prime Minister Harper brought in when he gutted Canada's Environmental Assessment Act. Ms. May described the new government's environmental approach as a vast improvement over Harper's, but one that does not match up to the standards former Liberal Finance Minister Ralph Goodale set in 2005: "The 2005 budget offered a fully formed climate action plan, including eco-energy rebates for homeowners, substantial funding for provinces to act to address the climate challenge, rebates for the purchase of energy efficient vehicles, and a carbon pricing scheme through a complicated carbon credit approach. The 2016 budget contains none of these measures. "Disturbingly, the budget cites the target of the Paris Agreement as avoiding 2 degrees Celsius global average temperature increase, when it was Canadian leadership that helped drive the world to the more ambitious goal of striving to hold temperature to no more than 1.5 degrees C," Ms. May said. "The Liberal platform promised carbon pricing, which we did not expect to see today given the negotiations with the premiers.
It also promised to reduce subsidies to fossil fuels by $125 million in 2017-18. No changes have yet been made to fossil fuel subsidies and subsidies to LNG are specifically continued until the end of 2024," said Ms. May. Kai Chan, an associate professor at the University of British Columbia and one of the 130 scientists who recently condemned the flawed review process for Pacific Northwest's proposed LNG terminal on Lelu Island, said Elizabeth May raised some important points. "I don't know if CEAA ever had the capacity to do their own analysis. I think they have relied on their own proponents the whole time, but their ability to critique and ensure the rigor of the analysis handed to them by the proponent has been curtailed. It has been getting worse and worse because of the cuts. They are very understaffed," he said. "My biggest concern, and I can't find any details on this yet, is all the major science-based guidance within the Federal agencies (CEAA, the DFO, Canada Parks) have all been hit quite hard because of budget restrictions. They have been short staffed for years (and have suffered from) reduced research funding; slashed travel budgets; travel restrictions. I don't know where to see if those have been restored. I think that's really important." Chan described the funding for parks as "important initiatives, it's not huge but more than we saw with the previous government." Clare Demerse, of Clean Energy Canada, [5. Roy L Hales Interview with Clare Demerse, of Clean Energy Canada] found it encouraging to see that the Government was providing funding to write environmental regulations and update building codes etc. "We have a lot of catching up to do because this was not a priority under the previous Government.
So it is a really important signal to say okay the budget is there, people can get down to work in Environment Canada and Natural Resources Canada, and other parts of the government, and really focus on climate action and clean energy as a priority," she said. One of the brightest sections of this budget is the attention given to the clean tech sector, which is a key component of building a more environmentally friendly future. "We were quite pleased with the way the budget treats clean energy more broadly. We thought that it makes some smart investments, and the government made it clear it sees it as an economic opportunity for Canada," said Demerse. "You can see that in a couple of ways, one being the Finance Minister's speech to the House. He talked about it as 'the future the World is tending to and we want Canada to lead in that future.' And then also the fact clean energy is really sprinkled throughout the budget. It wasn't just a few pages in an environmental section. You can read about it in all kinds of parts of the budget, whether you were talking about infrastructure, government procurement, or space for people in overseas missions for people trying to promote exports of clean technology." Kai Chan agreed, "Clean energy is a major component of the budget and certainly a major component of how they are representing it." Some of the specifics include: Chan pointed to the breakdown of investments in public transit on page 92 of the budget, and noted it was based on each province's existing share of public transit ridership. "Basically, where public transit is already helping many people, they will help it help more people," he said. "Overall, we were thinking of this as a downpayment," said Demerse. "We know that if all goes well, next year the Prime Minister and Premiers will have an agreement on a National Climate Plan, and in Vancouver they agreed it will be ready to implement in 2017.
So next year’s budget is probably going to be one where the federal government is probably going to have to play a very important role. So in next year’s budget we will be looking for things like a national carbon pricing, or support for low carbon infrastructure.” Based on the comments above, I would give this budget an “A” for effort but a barely passing overall grade because of its failure to address the damages the previous administration made to Canada’s environmental protections (specifically, Bill C-38). That said, this budget shows a marked improvement over those we have seen in the past decade. If the “interim measures” are replaced by more socially and environmentally sensitive legislation, there is good reason to be optimistic about the future. Photo Credits: Parliament, Ottawa by mark.watmough via Flickr (CC BY SA, 2.0 License); Erin Flanagan – Courtesy Pembina Institute; Elizabeth May, MP Saanich-Gulf Islands – Courtesy B.C. Green Party; Kai Chan, Associate Professor in the Institute for Resources, Environment and Sustainability, UBC; graph from Budget; Clare Demerse of Clean Tech Canada; Two graphs from the Budget Get CleanTechnica’s 1st (completely free) electric car report → “Electric Cars: What Early Adopters & First Followers Want.” Come attend CleanTechnica’s 1st “Cleantech Revolution Tour” event → in Berlin, Germany, April 9–10. Keep up to date with all the hottest cleantech news by subscribing to our (free) cleantech newsletter, or keep an eye on sector-specific news by getting our (also free) solar energy newsletter, electric vehicle newsletter, or wind energy newsletter. Trenkel V.M.,French Research Institute for Exploitation of the Sea | Huse G.,Norwegian Institute of Marine Research | MacKenzie B.R.,National Institute of Aquatic Resources DTU Aqua | MacKenzie B.R.,Technical University of Denmark | And 19 more authors. 
Progress in Oceanography | Year: 2014 This paper reviews the current knowledge on the ecology of widely distributed pelagic fish stocks in the North Atlantic basin with emphasis on their role in the food web and the factors determining their relationship with the environment. We consider herring (Clupea harengus), mackerel (Scomber scombrus), capelin (Mallotus villosus), blue whiting (Micromesistius poutassou), and horse mackerel (Trachurus trachurus), which have distributions extending beyond the continental shelf and predominantly occur on both sides of the North Atlantic. We also include albacore (Thunnus alalunga), bluefin tuna (Thunnus thynnus), swordfish (Xiphias gladius), and blue marlin (Makaira nigricans), which, by contrast, show large-scale migrations at the basin scale. We focus on the links between life history processes and the environment, horizontal and vertical distribution, spatial structure and trophic role. Many of these species carry out extensive migrations from spawning grounds to nursery and feeding areas. Large oceanographic features such as the North Atlantic subpolar gyre play an important role in determining spatial distributions and driving variations in stock size. Given the large biomasses of especially the smaller species considered here, these stocks can exert significant top-down pressures on the food web and are important in supporting higher trophic levels. The review reveals commonalities and differences between the ecology of widely distributed pelagic fish in the NE and NW Atlantic basins, identifies knowledge gaps and modelling needs that the EURO-BASIN project attempts to address. © 2014 Elsevier Ltd. Source News Article | March 11, 2016 They come from the West Coast, as far south as California, as north as Alaska, and as east as the Atlantic coast. 
Their joint letter refers to "Misrepresentation," "lack of information," and "Disregard for science that was not funded by the proponent." Scientists condemn the flawed review process for Lelu Island, at the mouth of British Columbia's Skeena River, as "a symbol of what is wrong with environmental decision-making in Canada." More than 130 scientists signed on to this letter. "This letter is not about being for or against LNG, the letter is about scientific integrity in decision-making," said Dr. Jonathan Moore, Liber Ero Chair of Coastal Science and Management, Simon Fraser University. One of the other signatories is Otto Langer, former Chief of Habitat Assessment at the Department of Fisheries and Oceans (DFO), who wrote: These are tough words for a Federal government that promised to put teeth back in the gutted environmental review process. In Prime Minister Justin Trudeau's defense, this is yet another problem he inherited from the previous administration, and the task of cleaning up this mess seems enormous. That said, this government was aware the environmental review process was broken before it was elected and has not intervened to at least stop the process from moving forward until it is prepared to take action. The Liberal Government appears to be facing a tough decision. So far, it has attempted to work with the provinces. On Lelu Island, as well as the equally controversial proposed Kinder Morgan Pipeline expansion and Site C Dam project, continuing to support Premier Clark's policies in this manner would appear to necessitate betraying the trust of the Canadian people. Here are a few choice excerpts from the public letter that more than 130 scientists sent to Catherine McKenna and Prime Minister Trudeau: " … The CEAA draft report has not accurately characterized the importance of the project area, the Flora Bank region, for fish.
The draft CEAA report1 states that the “…marine habitats around Lelu Island are representative of marine ecosystems throughout the north coast of B.C.”. In contrast, five decades of science has repeatedly documented that this habitat is NOT representative of other areas along the north coast or in the greater Skeena River estuary, but rather that it is exceptional nursery habitat for salmon2-6 that support commercial, recreational, and First Nation fisheries from throughout the Skeena River watershed and beyond7. A worse location is unlikely to be found for PNW LNG with regards to potential risks to fish and fisheries….” ” … CEAA’s draft report concluded that the project is not likely to cause adverse effects on fish in the estuarine environment, even when their only evidence for some species was an absence of information. For example, eulachon, a fish of paramount importance to First Nations and a Species of Special Concern8, likely use the Skeena River estuary and project area during their larval, juvenile, and adult life-stages. There has been no systematic study of eulachon in the project area. Yet CEAA concluded that the project posed minimal risks to this fish…” ” … CEAA’s draft report is not a balanced consideration of the best-available science. On the contrary, CEAA relied upon conclusions presented in proponent-funded studies which have not been subjected to independent peer-review and disregarded a large and growing body of relevant independent scientific research, much of it peer-reviewed and published…” ” …The PNW LNG project presents many different potential risks to the Skeena River estuary and its fish, including, but not limited to, destruction of shoreline habitat, acid rain, accidental spills of fuel and other contaminants, dispersal of contaminated sediments, chronic and acute sound, seafloor destruction by dredging the gas pipeline into the ocean floor, and the erosion and food-web disruption from the trestle structure. 
Fisheries and Oceans Canada (DFO) and Natural Resources Canada provided detailed reviews12 on only one risk pathway – habitat erosion – while no such detailed reviews were conducted on other potential impacts or their cumulative effects…” ” … CEAA’s draft report concluded that the project posed moderate risks to marine fish but that these risks could be mitigated. However, the proponent has not fully developed their mitigation plans and the plans that they have outlined are scientifically dubious. For example, the draft assessment states that destroyed salmon habitat will be mitigated; the “proponent identified 90 000 m2 of lower productivity habitats within five potential offsetting sites that could be modified to increase the productivity of fisheries”, when in fact, the proponent did not present data on productivity of Skeena Estuary habitats for fish at any point in the CEAA process. Without understanding relationships between fish and habitat, the proposed mitigation could actually cause additional damage to fishes of the Skeena River estuary…” British Columbia Institute of Technology 1. Marvin Rosenau, Ph.D., Professor, British Columbia Institute of Technology. 2. Eric M. Anderson, Ph.D., Faculty, British Columbia Institute of Technology. British Columbia Ministry of Environment 1. R. S. Hooton, M.Sc., Former Senior Fisheries Management Authority for British Columbia Ministry of Environment, Skeena Region. California Academy of Sciences 1. John E. McCosker, Ph.D., Chair of Aquatic Biology, Emeritus, California Academy of Sciences. Department of Fisheries and Oceans Canada 1. Otto E. Langer, M.Sc., R.P.Bio., Fisheries Biologist, Former Chief of Habitat Assessment, Department of Fisheries and Oceans Canada Memorial University of Newfoundland 1. Ian A. Fleming, Ph.D., Professor, Memorial University of Newfoundland. 2. Brett Favaro, Ph.D., Liber Ero conservation fellow, Memorial University of Newfoundland. Norwegian Institute for Nature Research 1. 
Rachel Malison, Ph.D., Marie Curie Fellow and Research Ecologist, The Norwegian Institute for Nature Research. Russian Academy of Science 1. Alexander I. Vedenev, Ph.D., Head of Ocean Noise Laboratory, Russian Academy of Science 2. Victor Afanasiev, Ph.D., Russian Academy of Sciences. Sakhalin Research Institute of Fisheries and Oceanography 1. Alexander Shubin, M.Sc. Fisheries Biologist, Sakhalin Research Institute of Fisheries and Oceanography. Simon Fraser University, BC 1. Jonathan W. Moore, Ph.D., Liber Ero Chair of Coastal Science and Management, Associate Professor, Simon Fraser University. 2. Randall M. Peterman, Ph.D., Professor Emeritus and Former Canada Research Chair in Fisheries Risk Assessment and Management, Simon Fraser University. 3. John D. Reynolds, Ph.D., Tom Buell BC Leadership Chair in Salmon Conservation, Professor, Simon Fraser University 4. Richard D. Routledge, Ph.D., Professor, Simon Fraser University. 5. Evelyn Pinkerton, Ph.D., School of Resource and Environmental Management, Professor, Simon Fraser University. 6. Dana Lepofsky, Ph.D., Professor, Simon Fraser University 7. Nicholas Dulvy, Ph.D., Canada Research Chair in Marine Biodiversity and Conservation, Professor, Simon Fraser University. 8. Ken Lertzman, Ph.D., Professor, Simon Fraser University. 9. Isabelle M. Côté, Ph.D., Professor, Simon Fraser University. 10. Brendan Connors, Ph.D., Senior Systems Ecologist, ESSA Technologies Ltd., Adjunct Professor, Simon Fraser University. 11. Lawrence Dill, Ph.D., Professor Emeritus, Simon Fraser University. 12. Patricia Gallaugher, Ph.D., Adjunct Professor, Simon Fraser University. 13. Anne Salomon, Ph.D., Associate Professor, Simon Fraser University. 14. Arne Mooers, Ph.D., Professor, Simon Fraser University. 15. Lynne M. Quarmby, Ph.D., Professor, Simon Fraser University. 16. Wendy J. Palen, Ph.D., Associate Professor, Simon Fraser University. University of Alaska 1. 
Peter Westley, Ph.D., Assistant Professor of Fisheries, University of Alaska Fairbanks. 2. Anne Beaudreau, Ph.D., Assistant Professor of Fisheries, University of Alaska Fairbanks. 3. Megan V. McPhee, Ph.D., Assistant Professor, University of Alaska Fairbanks. University of Alberta 1. David.W. Schindler, Ph.D., Killam Memorial Professor of Ecology Emeritus, University of Alberta. 2. Suzanne Bayley, Ph.D., Emeritus Professor, University of Alberta. University of British Columbia 1. John G. Stockner, Ph.D., Emeritus Senior Scientist DFO, West Vancouver Laboratory, Adjuct Professor, University of British Columbia. 2. Kai M.A. Chan, Ph.D., Canada Research Chair in Biodiversity and Ecosystem Services, Associate Professor, University of British Columbia 3. Hadi Dowlatabadi, Ph.D., Canada Research Chair in Applied Mathematics and Integrated Assessment of Global Change, Professor, University of British Columbia 4. Sarah P. Otto, Ph.D., Professor and Director, Biodiversity Research Centre, University of British Columbia. 5. Michael Doebeli, Ph.D., Professor, University of British Columbia. 6. Charles J. Krebs, Ph.D., Professor, University of British Columbia. 7. Amanda Vincent, Ph.D., Professor, University of British Columbia. 8. Michael Healey, Ph.D., Professor Emeritus, University of British Columbia. University of California (various campuses) 1. Mary E. Power, Ph.D., Professor, University of California, Berkeley 2. Peter B. Moyle, Ph.D., Professor, University of California. 3. Heather Tallis, Ph.D., Chief Scientist, The Nature Conservancy, Adjunct Professor, University of California, Santa Cruz. 4. James A. Estes, Ph.D., Professor, University of California. 5. Eric P. Palkovacs, Ph.D., Assistant Professor, University of California-Santa Cruz. 6. Justin D. Yeakel, Ph.D., Assistant Professor, University of California. 7. John L. Largier, Ph.D., Professor, University of California Davis. University of Montana 1. Jack A. 
Stanford, Ph.D., Professor of Ecology, University of Montana. 2. Andrew Whiteley, Ph.D., Assistant Professor, University of Montana. 3. F. Richard Hauer, Ph.D., Professor and Director, Center for Integrated Research on the Environment, University of Montana. University of New Brunswick 1. Richard A. Cunjak, Ph.D., Professor, University of New Brunswick. University of Ontario Institute of Technology 1. Douglas A. Holdway, Ph.D., Canada Research Chair in Aquatic Toxicology, Professor, University of Ontario Institute of Technology. University of Ottawa 1. Jeremy Kerr, Ph.D., University Research Chair in Macroecology and Conservation, Professor, University of Ottawa University of Toronto 1. Martin Krkosek, Ph.D., Assistant Professor, University of Toronto. Gail McCabe, Ph.D., University of Toronto. University of Victoria 1. Chris T. Darimont, Ph.D., Associate Professor, University of Victoria 2. John Volpe, Ph.D., Associate Professor, University of Victoria. 3. Aerin Jacob, Ph.D., Postdoctoral Fellow, University of Victoria. 4. Briony E.H. Penn, Ph.D., Adjunct Professor, University of Victoria. 5. Natalie Ban, Ph.D., Assistant Professor, School of Environmental Studies, University of Victoria. 6. Travis G. Gerwing, Ph.D., Postdoctoral Fellow, University of Victoria. 7. Eric Higgs, Ph.D., Professor, University of Victoria. 8. Paul C. Paquet, Ph.D., Senior Scientist, Raincoast Conservation Foundation, Adjunct Professor, University of Victoria. 9. James K. Rowe, Ph.D., Assistant Professor, University of Victoria. University of Washington 1. Charles Simenstad, Ph.D., Professor, University of Washington. 2. Daniel Schindler, Ph.D., Harriet Bullitt Endowed Chair in Conservation, Professor, University of Washington. 3. Julian D. Olden, Ph.D., Associate Professor, University of Washington. 4. P. Sean McDonald, Ph.D., Research Scientist, University of Washington. 5. Tessa Francis, Ph.D., Research Scientist, University of Washington. University of Windsor 1. 
Hugh MacIsaac, Ph.D., Canada Research Chair Great Lakes Institute for Environmental Research, Professor, University of Windsor. Photo Credits: 9 of the scientists condemning the CEAA review are professors at the University of Victoria. Photo shows U Vic students listening to a UN official in 2012 by Herb Neufeld via Flickr (CC BY SA, 2.0 License); Screen shot from a Liberal campaign video in which Trudeau promised to bring real change to Ottawa; 8 of the scientists condemning the CEAA review are professors at the University of British Columbia. Photo of UBC by abdallahh via Flickr (CC BY SA, 2.0 License); 5 of the scientists condemning the CEAA review are from the University of Washington. Photo is Mary Gates Hall, in the University of Washington by Nam-ho Park via Flickr (CC BY SA, 2.0 License); 5 of the scientists condemning the CEAA review are from the Skeena Fisheries Commission. Photo is Coast mountains near the mouth of the Skeena River by Roy Luck via Flickr (CC BY SA, 2.0 License); 16 of the scientists condemning the CEAA review were professors at Simon Fraser University. Photo shows SFU's Reflective Pool by Jon the Happy Web Creative via Flickr (CC BY SA, 2.0 License)
<urn:uuid:4859a52f-e76a-4177-b5a0-1b12a40e3576>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/dfo-1639043/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00492-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922664
5,646
2.84375
3
Publicly available data that could be aggregated and used by intelligent systems to predict future events is out there, if you can harness the technology to utilize it. That's one of the driving ideas behind a program that the Intelligence Advanced Research Projects Activity (IARPA) group will detail at a Proposer's Day conference in Washington, DC next month. The program, known as Open Source Indicators (OSI), will aim to "develop methods for continuous, automated analysis of publicly available data in order to anticipate and/or detect societal disruptions, such as political crises, disease outbreaks, economic instability, resource shortages, and natural disasters," IARPA stated. According to the agency: "Many significant societal events are preceded and/or followed by population-level changes in communication, consumption, and movement. Some of these changes may be indirectly observable from publicly available data, such as web search trends, blogs, microblogs, internet traffic, webcams, financial markets, and many others. Published research has found that many of these data sources are individually useful in the early detection of events such as disease outbreaks and macroeconomic trends. However, little research has examined the value of combinations of data from diverse sources." "The OSI Program will aim to develop methods that "beat the news" by fusing early indicators of events from multiple data sources and types. Anticipated innovations include: development of empirically-driven sociological models for population behavior change in anticipation of, and response to, events of interest; collection and processing of publicly available data that represent those population behavior changes; development of data extraction techniques that focus on volume, rather than depth, by identifying shallow features of data that correlate with events..." IARPA stated.
According to IARPA, OSI will not fund research on US events, the identification or movement of specific individuals, collection mechanisms that require directed participation by individuals, or advanced natural language processing. It is expected that performers will use existing, off-the-shelf technologies to extract features of interest in publicly available data, and that research will focus on methods for correlating combinations of data with events, the group stated. "Collaborative efforts and teaming among potential performers will be encouraged. It is anticipated that teams will be multidisciplinary, and might include social scientists, mathematicians, statisticians, computer scientists, content extraction experts, and information theorists. IARPA anticipates that academic institutions and companies from around the world will participate in this program. Researchers will be encouraged to publish their findings in academic journals." IARPA has a number of interesting ongoing operations. You may recall that the agency in May said it wanted to build a repository of metaphors. Not just American/English metaphors, mind you, but those of Iranian Farsi, Mexican Spanish and Russian speakers. In the end the program should produce a methodology, tools and techniques together with a prototype system that will identify metaphors that provide insight into cultural beliefs. It should also help build a structured framework that organizes the metaphors associated with the various dimensions of an analytic problem and build a metaphor repository where all metaphors and related information are captured for future reference and access, IARPA stated. IARPA also runs the Automated Low-Level Analysis and Description of Diverse Intelligence Video (ALADDIN) program which looks to build and analyze what it calls open source video clips. Follow Michael Cooney on Twitter: nwwlayer8
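The "fusing early indicators" idea behind OSI can be illustrated with a toy sketch. This is not IARPA's or any performer's actual method; the rolling-window z-score fusion, the window size, and the threshold are all assumptions made up purely for illustration:

```python
from statistics import mean, stdev

def fused_alert(signals, window=7, threshold=2.0):
    """Illustrative sketch only (not an IARPA method): fuse several
    public time series (e.g. search trends, traffic counts) by
    averaging their rolling z-scores and flag time steps where the
    fused score exceeds a threshold.

    signals: list of equal-length numeric sequences.
    Returns the list of flagged time indices.
    """
    n = len(signals[0])
    flagged = []
    for t in range(window, n):
        zs = []
        for s in signals:
            hist = s[t - window:t]          # trailing window of history
            mu, sd = mean(hist), stdev(hist)
            # z-score of the current value against its own recent past
            zs.append((s[t] - mu) / sd if sd else 0.0)
        if mean(zs) > threshold:            # fused score across sources
            flagged.append(t)
    return flagged
```

A real system would weight heterogeneous sources differently and model seasonality; the point is only that a fused score can fire when several weak signals move together, which is the "beat the news" premise described above.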
<urn:uuid:23730cb7-4762-418c-b454-d8af7021ec4f>
CC-MAIN-2017-04
http://www.networkworld.com/article/2220123/security/us-intelligence-agency-wants-technology-to-predict-the-future-from-public-events.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00308-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92876
733
3
3
New help on testing for common cause of software bugs NIST releases a tutorial on automated testing of multiple variables - By William Jackson - Nov 01, 2010 The National Institute of Standards and Technology has developed algorithms for automated testing of the multiple variables in software that can cause security faults, and has released a tutorial for using the tools. The improper or unexpected interaction of two or more parameters in a piece of software, such as inputs or configuration settings, is a significant cause of security bugs. But testing for these problems has been limited by the cost and complexity of testing the huge number of possible combinations. NIST in 2003 reported that such problems cost the U.S. economy more than $59 billion a year despite the fact that more than half of most software development budgets went toward testing. Research has shown that in many cases the large majority of such faults, from 89 to 100 percent, are caused by combinations of no more than four variables, and virtually all are caused by no more than six, NIST has reported. "This finding has important implications for testing because it suggests that testing combinations of parameters can provide highly effective fault detection," NIST said in the tutorial, "Practical Combinatorial Testing" (Special Publication 800-142). Testing pairs of variables, although practical, can miss from 10 percent to 40 percent of system bugs, NIST said. But a lack of good algorithms for testing higher numbers of variables at a time has made such testing impracticably expensive, so it is not used except for high-assurance software for mission-critical applications.
The Automated Combinatorial Testing for Software program is a cooperative effort by NIST, the Air Force, the University of Texas at Arlington, George Mason University, Utah State University, the University of Maryland and North Carolina State University to produce methods and tools to generate tests for any number of variable combinations. SP 800-142 offers instructions for their use. The new algorithms and tools make automated testing for relatively small combinations of variables practical, but combinatorial testing is not cost-free. The NIST publication provides information on the costs and practical considerations for each type of testing, and explains tradeoffs and limitations. William Jackson is a Maryland-based freelance writer.
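The combinatorial blow-up NIST describes is easy to see in code. The sketch below (with illustrative parameter names, not taken from SP 800-142) enumerates the t-way interactions that a covering-array test suite must exercise at least once:

```python
from itertools import combinations, product

# Hypothetical configuration parameters under test (illustrative only).
params = {
    "os": ["windows", "linux", "macos"],
    "browser": ["chrome", "firefox"],
    "ipv6": [True, False],
    "encryption": ["on", "off"],
}

def t_way_interactions(params, t):
    """Yield every choice of t parameters together with every joint
    assignment of their values -- the interactions a t-way covering
    array must exercise at least once."""
    for names in combinations(params, t):
        for values in product(*(params[n] for n in names)):
            yield dict(zip(names, values))

# Exhaustive testing tries every full combination of all parameters.
exhaustive = 1
for values in params.values():
    exhaustive *= len(values)

pairs = list(t_way_interactions(params, 2))
print(exhaustive)   # 24 test cases for exhaustive testing
print(len(pairs))   # 30 distinct 2-way interactions to cover
```

Because each test case exercises several interactions at once, a pairwise covering array needs far fewer than 30 tests; the hard problem NIST's algorithms address is constructing small covering arrays as t rises toward 4, 5, or 6, where the interaction counts grow combinatorially.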
Researchers Warn of Serious SSH Flaws

The new flaws, found in several implementations of the SSHv2 protocol, are especially dangerous in that they occur before authentication takes place.

Security researchers have discovered a set of vulnerabilities in several vendors' implementations of the SSHv2 protocol that could give an attacker the ability to execute code on remote machines. The new flaws are especially dangerous in that they occur before authentication takes place.

The SSH (secure shell) protocol is a transport layer protocol that enables clients to connect securely to a remote server. It's often used for remote administration purposes. Although the result of exploiting one of these vulnerabilities varies by vendor and vulnerability, attackers could, in some cases, run code on remote machines or launch denial-of-service attacks.

Rapid 7 Inc., the New York-based security company that found the vulnerabilities, only tested SSHv2 implementations but said that some SSHv1 implementations may be vulnerable as well. Most of the flaws involve memory access violations, and all of them are found in the greeting and key-exchange phase of the SSH transmission.

Among the vendors whose products are vulnerable are SSH Communications Security Inc., F-Secure Corp., InterSoft International Inc., and several others. However, both SSH Communications and F-Secure say that the vulnerabilities are not exploitable in their software.
Applicable Version: 10.00 onwards

Denial of Service (DoS)

A Denial of Service (DoS) attack is an attempt to make a machine or network resource unavailable to its intended users. One common method of attack involves saturating the target machine with external communications requests, such that it cannot respond to legitimate traffic, or responds so slowly as to be rendered essentially unavailable.

DoS attacks can be carried out in the following ways:

ICMP Flood: In such an attack, the perpetrators send large numbers of IP packets with the source address faked to appear to be the address of the victim. The network's bandwidth is quickly used up, preventing legitimate packets from getting through to their destination.

SYN/TCP Flood: A SYN flood occurs when a host sends a flood of TCP/SYN packets, often with a forged sender address. Each of these packets is handled like a connection request, causing the server to spawn a half-open connection by sending back a TCP/SYN-ACK packet (Acknowledge) and waiting for a response from the sender address. However, because the sender address is forged, the response never comes. These half-open connections saturate the number of available connections the server is able to make, keeping it from responding to legitimate requests until after the attack ends.

UDP Flood: A UDP flood attack can be initiated by sending a large number of UDP packets to random ports on a remote host. For a large number of UDP packets, the victimized system will be forced into sending many ICMP packets, eventually leading it to be unreachable by other clients.

Distributed Denial of Service (DDoS)

A Distributed Denial of Service (DDoS) attack is an attack in which multiple legitimate or compromised systems perform a DoS attack against a single target or system. This distributed attack can compromise the victim machine or force it to shut down, which in turn bars service to its legitimate users.
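The per-source thresholds used to throttle such floods amount to sliding-window rate limiting. A minimal sketch of the idea (illustrative only, not Cyberoam's implementation), using a threshold of 1200 packets per minute per source:

```python
import time
from collections import defaultdict, deque

class PerSourceRateLimiter:
    """Drop packets from any source that exceeds `limit` packets per
    `window` seconds -- the same idea as a 'packet rate per source'
    DoS threshold (e.g. 1200 ICMP packets per minute)."""

    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self.history = defaultdict(deque)  # source -> recent timestamps

    def allow(self, source, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[source]
        # Expire timestamps that have slid out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # excess traffic from this source is dropped
        q.append(now)
        return True

limiter = PerSourceRateLimiter(limit=1200, window=60.0)
# The 1201st packet from the same source inside one window is dropped.
results = [limiter.allow("10.0.0.1", now=0.0) for _ in range(1201)]
print(results.count(True))   # 1200
print(results[-1])           # False
```

Note that the limit is tracked per source, so legitimate traffic from other hosts is unaffected while one flooding source is throttled.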
This article describes how you can protect your network against DoS and DDoS attacks using Cyberoam. It is divided into two sections:

Protecting from DoS Attack

You can protect your network against DoS attacks for both IPv4 and IPv6 traffic by configuring appropriate DoS Settings on Cyberoam. To configure DoS Settings, follow the steps given below.

· Login to Cyberoam Web Admin Console using a profile having read-write administrative rights over the relevant features.

· Go to Firewall > DoS > Settings, set the given parameters as appropriate to your network traffic and check the Apply Flag against the configured parameter to enable scanning for the respective type of traffic. For example, here we have set Packet Rate per Source (Packet/min) as 1200 for ICMP/ICMPv6 Flood and checked the Apply Flag against it to enable scanning for ICMP and ICMPv6 traffic.

· Click Apply to apply the configured DoS Settings.

Once DoS Settings are applied, Cyberoam keeps a check on the network traffic to ensure that it does not exceed the configured limit. For example, once the above settings are applied, Cyberoam scans the network traffic for ICMP and ICMPv6 packets. If the number of ICMP/ICMPv6 packets from a particular source exceeds 1200 per minute, Cyberoam drops the excessive packets and continues dropping until the attack subsides.

Protecting from DDoS Attack

You can protect your network against DDoS attacks using IPS policies in Cyberoam. To configure an IPS policy, follow the steps given below.

· Login to Cyberoam Web Admin Console using a profile having read-write administrative rights over the relevant features.

· Go to IPS > Policy > Policy and click Add to create a new IPS policy named ‘DDoS_Protection’.

· Select the newly created policy and click Add to add a rule to the IPS policy.

· Click Select Individual Signature and search for DDoS signatures.

· Select the DDoS signatures and select Action as Drop Packet. Click OK to save the rule.

· Click OK to save the policy.
· Go to Firewall > Rule > Rule and apply the policy on the required firewall rule. Here, we have applied it on LAN_WAN_LiveUserTraffic. Click OK to save the firewall settings.

Once the IPS policy is applied, Cyberoam keeps a lookout for any packets that match the configured IPS signature(s). If any such packets are found, Cyberoam drops them.

Document Version: 1.1 – 2 May, 2014
A big data tool developed by IBM in partnership with Deutsches Elektronen-Synchrotron (DESY) will enable scientists around the world to more quickly manage and share massive volumes of x-ray data produced by a super microscope in Germany.

German research center DESY develops, builds and operates large particle accelerators used to investigate the structure of matter. One of these accelerators, the 1.7 mile-long PETRA III, speeds up electrically charged particles to nearly the speed of light – about 186,000 miles per second – and sends them onto a magnetic track to generate remarkably brilliant x-rays, known as synchrotron radiation. About 2,000 scientists a year use the instrument to study the atomic structure of novel semiconductors, catalysts, biological cells and other materials. This translates to huge volumes of x-ray data.

The challenge prompted IBM and DESY to implement a big data and analytics architecture using IBM's software-defined technology, codenamed Elastic Storage, which can handle more than 20 gigabytes of data per second at peak performance.

"A typical detector generates a data stream of about 5 Gigabit per second, which is about the data volume of one complete CD-ROM per second," said Dr. Volker Gülzow, head of DESY IT. "And at PETRA III we do not have just one detector, but 14 beamlines equipped with many detectors, and they are currently being extended to 24. All this Big Data must be stored and handled reliably."

Elastic Storage is described as a scalable, high-performance data and file management solution, based upon General Parallel File System (GPFS) technology, that offers:

- Enhanced security – native encryption and secure erase, NIST SP 800-131A encryption compliance.
- Increased performance – server-side Elastic Storage Flash caches increase IO performance up to 6X.
- Improved usability – data migration; AFM, FPO, and backup/restore enhancements; reliability, availability and serviceability enhancements.
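The quoted figures are easy to sanity-check with back-of-the-envelope arithmetic (assuming decimal units and a ~650 MB CD-ROM):

```python
# Sanity-check of the data rates quoted above (decimal units assumed).
detector_bits_per_s = 5e9                        # 5 Gigabit/s per detector stream
detector_bytes_per_s = detector_bits_per_s / 8   # 0.625 GB/s

cd_rom_bytes = 650e6                             # a ~650 MB CD-ROM
cds_per_second = detector_bytes_per_s / cd_rom_bytes

peak_bytes_per_s = 20e9                          # Elastic Storage peak: 20 GB/s
streams_at_peak = peak_bytes_per_s / detector_bytes_per_s

print(round(cds_per_second, 2))   # ~0.96, about one CD-ROM per second
print(streams_at_peak)            # 32.0 concurrent 5 Gbit/s streams
```

So the 20 GB/s peak corresponds to roughly 32 detectors streaming at full rate simultaneously, which gives a sense of why the planned extension from 14 to 24 beamlines, each with multiple detectors, matters for the storage system.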
The storage architecture empowers geographically distributed workflows by placing critical data close to everyone and everything that needs it, no matter where they are in the world. IBM reports the technology will enable DESY to provide analysis-as-a-service and cloud-based solutions to its worldwide user base. A plan is also in the works to expand the big data architecture to support the European x-ray free electron laser (European XFEL), an x-ray research laser facility currently under construction that is scheduled to start operation in 2017. “We expect about 100 Petabyte per year from the European XFEL,” said Dr. Gülzow. That is on par with the data volume produced at the world’s largest particle accelerator, the Large Hadron Collider (LHC) at CERN in Geneva.
Identifying a Skills Gap in the Workforce

The gulf between the capabilities of a collective workforce and the level of aptitude an employer demands is known as a “skills gap” in the labor market. Serious problems arise when a workforce’s proficiency cannot keep pace with economic development. In an increasingly interconnected and global business environment, one lagging sector can drag down several others.

So, what does this have to do with credentialing programs? The role of certification—as with education and training in general—is to prepare people for the challenges of the world. Certifications don’t exist for their own sake. They exist to give employees and employers a reliable and effective way to supply people with a standard set of skills. This usually entails identifying a skills gap within a discipline. In the certification universe, there are a myriad of offerings for skills, job roles and disciplines, from school teachers to accountants to information technology.

The National Commission for Certifying Agencies (NCCA), which is part of the National Organization for Competency Assurance (NOCA), promotes 21 standards for administering premium certifications. “The NCCA standards are a blueprint—almost a business plan in some ways—for how to build a quality certification program,” said Wade Delk, executive director of NOCA. “Whether you intend to be accredited by the NCCA or not, if you follow the standards to the best of your ability, you’re going to create a very high-quality certification program.”

Delk said employers should identify deficiencies in the workforce’s skills to determine if the certification is necessary. “First, determine there is a need for the certification,” Delk explained.
“If you’re sitting around a table and saying, ‘You know, it might be fun to have a certification in this area,’ that’s certainly not relevant and valid enough to start it.”

Ideally, once the skills gap is identified, a certification will be developed and rolled out promptly. However, this is seldom the case, as job roles and requisite expertise change rapidly and program managers face resource limitations. For Scott Grams, director of the GIS Certification Institute (GISCI), forming a credentialing program in geographical information systems (GIS) took more than a decade.

“Certification was an idea that had been discussed in the geographic information systems community for some time—probably for 10 or 15 years in backroom discussions at various conferences,” he said. “As GIS continued to grow and got integrated into disciplines like planning, emergency management, crime analysis, health and environmental sciences, etc., this profession sort of emerged out of it. In order to have a true profession, a number of GIS professionals felt that there needed to be some kind of credentialing program and a code of ethics.”

Roughly five years ago the Urban and Regional Information Systems Association researched the need for GIS certification to see if it was viable. “What it did was create a committee of 40 individuals from a wide variety of disciplines—academia, non-profit organizations and private and public sectors,” Grams said. “All those individuals started to investigate how such a program would work. Would it be examination based? Portfolio based? Would there be different tiers of certification? Would it be a binary certification—you’re in or you’re out?”

Once the ball gets rolling on the certification, determine how the levels of proficiency will be evaluated. To find a method to identify GIS professionals’ needs, GISCI ran a pilot program for the first few months of the certification’s existence.
It used the applications the candidates submitted to the organization to identify their abilities. “The first versions of the program were very open,” Grams said. “The documentation requirements weren’t as strong. While the pilot program was going on, the committee kept meeting, and they were given updates of the program and saw all of the portfolios. The application wasn’t changed dramatically but significantly enough. They really wanted to do a certification program based on an application, and the only thing that’s going to give that approach some teeth is by having strict documentation requirements.” When the pilot phase was complete, GISCI decided that it would keep this approach, evaluating applicants through a points system based on education, professional experience and industry contributions. It eschewed exam-based evaluations because of the diversity of GIS solutions. “There are different GIS platforms, and they felt creating an examination that involved all of these various activities would be something that numerous groups in the profession would be debating about until the end of time,” Grams said. –Brian Summerfield, email@example.com
Harzhauser M., Natural History Museum Vienna | Djuricic A., Natural History Museum Vienna | Djuricic A., Vienna University of Technology | Mandic O., Natural History Museum Vienna | And 11 more authors.

Palaeogeography, Palaeoclimatology, Palaeoecology | Year: 2015

We present the largest GIS-based data set of a single shell bed, comprising more than 10,280 manually outlined objects. The data are derived from a digital surface model based on high-resolution terrestrial laser scanning (TLS) and orthophotos obtained by photogrammetric survey, with a sampling distance of 1 mm and 0.5 mm, respectively. The shell bed is an event deposit, formed by a tsunami or an exceptional storm in an Early Miocene estuary. Disarticulated shells of the giant oyster Crassostrea gryphoides are the most frequent objects along with venerid, mytilid and solenid bivalves and potamidid gastropods. The contradicting ecological requirements and different grades of preservation of the various taxa mixed in the shell bed, along with a statistical analysis of the correlations of occurrences of the species, reveal an amalgamation of at least two pre- and two post-event phases of settlement under different environmental conditions. Certain areas of the shell bed display seemingly significant but opposing shell orientations. These patterns in coquinas formed by densely spaced elongate shells may result from local alignment of neighboring valves due to occasional events and bioturbation during the years of exposure. Similarly, the patchy occurrence of high ratios of shells in stable convex-up positions may simply be a result of such "maturity" effects. Finally, we document the difficulties in detecting potential tsunami signatures in shallow marine settings even in exceptionally preserved shell beds due to taphonomic bias by post-event processes. © 2015. Source
DARPA Seeks To Learn From Social For Warfare

Agency aims to explore how the use of social media--particularly on mobile devices--can be used to help wage military campaigns.

The Department of Defense (DOD) aims to develop new ways to use social media sites like YouTube and Facebook to help it better leverage the technology for military engagements. The Defense Advanced Research Projects Agency (DARPA) is seeking proposals for a "new science of social networks" through a program called Social Media in Strategic Communication (SMISC), according to a Broad Agency Announcement (BAA) posted on the FedBizOpps.gov site.

The agency aims to use social media on "an emerging technology base," including but not limited to mobile devices, which DARPA said is a key driver for how social media can change the game for the military. "The conditions under which our Armed Forces conduct operations are rapidly changing with the spread of blogs, social networking sites, and media-sharing technology (such as YouTube), and further accelerated by the proliferation of mobile technology," according to the BAA. "Changes to the nature of conflict resulting from the use of social media are likely to be as profound as those resulting from previous communications revolutions."

DARPA believes that by using social media effectively, the DOD can better understand the environment in which it operates and use information more nimbly to support its missions, according to the announcement. For example, the agency said, in one instance the military was trying to find a certain individual, and rumors of that person's location were circulating in the social media world. Because of the rumors, people communicating on social media sites were calling for the military to attack the rumored location.
However, by monitoring those rumors and sending out "effective messaging" to dispel them before they were verified, an unnecessary and unwarranted attack was averted, according to DARPA. "This was one of the first incidents where a crisis was (1) formed (2) observed and understood in a timely fashion and (3) defused by timely action, entirely within the social media space," according to the BAA.

There are several specific goals for the SMISC program, according to the BAA. The first is to detect, classify, measure, and track how ideas are formed, developed, and spread via social media, as well as how purposeful or deceptive messaging and misinformation are used. DARPA also aims to develop recognition of persuasion campaign structures and influence operations across social media sites and communities, as well as to identify the participants and intent of these campaigns and measure their effects. Finally, the agency plans to detect influence operations of its adversaries and counter their messaging, according to the BAA.

The initial date for proposals for the program is Aug. 30, with final papers due to be submitted by Oct. 11. DARPA will hold an Industry Day about the program on Aug. 2.
http://www.eweek.com/article2/0,4149,1408909,00.asp By Dennis Fisher December 10, 2003 Security experts have found a new way to exploit a critical vulnerability in Windows that evades a workaround and enables the attacker to compromise a number of machines at one time. The new attack could also lead to the creation of another fast-spreading Windows worm, the experts warned. The workaround in question is for the buffer overrun flaw in the Windows Workstation Service, which is enabled by default in Windows 2000 and XP. An attacker who successfully exploits the weakness could run any code of choice on the vulnerable machine. Microsoft Corp. issued a patch for the vulnerability in November, but the security bulletin also listed several workarounds for the flaw, including disabling the Workstation Service and using a firewall to block specific UDP and TCP ports. But penetration testers at Core Security Technologies, a Boston-based security company, discovered a new attack vector that uses a different UDP port. This attack still allows the malicious packets to reach the vulnerable Workstation Service. The attack takes advantage of several characteristics of the UDP protocol. Unlike TCP, UDP is "connectionless," meaning that there is no TCP-style handshake, and you need not establish a connection with a remote machine in order to send a UDP packet. Also, because the Internet's DNS service uses the protocol, UDP packets usually have no trouble traversing firewalls. These factors combine to make it possible for an attacker to send a broadcast UDP packet containing the malicious code to multiple machines on a given network. The traffic can be disguised to look like DNS packets, further obscuring the attack. "If someone hasn't applied the patch but blocked the ports as they should have, they're still vulnerable," said Max Caceres, a product manager at Core Impact. The patch for the Workstation Service vulnerability does protect against this latest attack, Caceres said. 
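The "connectionless" property at the heart of this attack vector is easy to demonstrate. The benign sketch below sends a single UDP datagram over loopback with no handshake or connection setup of any kind; this is the same property that lets one broadcast packet reach many hosts at once and makes UDP source addresses trivial to forge:

```python
import socket

# UDP is connectionless: a datagram goes out with a single sendto() call,
# with no handshake and no established connection. This benign sketch
# demonstrates the property over the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))           # let the OS pick a free port
receiver.settimeout(5.0)
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))   # no connect() required

data, addr = receiver.recvfrom(1024)
print(data)  # b'hello'
sender.close()
receiver.close()
```

Because no connection state is exchanged, a firewall that permits UDP on a given port (as many do for DNS) has little basis for distinguishing a legitimate datagram from a disguised one, which is the behavior the Core researchers exploited.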
Core Security notified Microsoft of its findings earlier this week.
FAA Infrastructure: Air Traffic System History

By Chris Preimesberger | Posted 2008-10-14

Transitioning off of legacy systems is never easy, but it’s especially challenging if you are an agency of the U.S. government such as the FAA (Federal Aviation Administration). Real progress on a next-generation system is being made, but you wouldn’t necessarily know it if you read some news headlines about FAA system failures this year. Beyond being a nuisance to airlines and travelers, experts and former employees of the FAA are calling flight-plan system failures a warning sign for peril.

Most localized air traffic control systems in use today were designed in the 1960s and '70s and installed throughout those years and into the '90s. Radar has been used since World War II. Many technologies are used in air traffic control systems. Primary and secondary radar is used to enhance a controller's "situational awareness" within his assigned air space; all types of aircraft send back primary echoes of varying sizes to controllers' screens as radar energy is bounced off their skins. Transponder-equipped aircraft reply to secondary radar interrogations by giving an ID (Mode A), an altitude (Mode C) and/or a unique call sign (Mode S). Certain types of weather also may register on a radar screen.

The traffic-handling systems used at most international airports are highly proprietary. Systems engineers are tight-lipped about them in general. They work hand in hand with the flight-plan system and have many redundancies built into them. Andy Isaksen, a computer scientist for the FAA in Atlanta, was the designer of the flight-plan system. In a 2005 NetworkWorld article, Isaksen told Deni Connor that the NADIN system's two Philips DS714/81 mainframe computers were originally manufactured in 1968 and upgraded with new processors in 1981.
Since then, they have been getting increasingly harder to maintain, support and write code for, Isaksen said. The Isaksen flight-plan network is the centerpiece of the FAA's air traffic system. Any aircraft that enters or leaves U.S. air space has to file a plan into the system. The network also serves as the sole data interchange between the United States and other nations to distribute flight plans for commercial and general aviation, as well as weather and advisory notices to pilots. To its credit, the air traffic system probably has been running around the clock 99.9 percent of the time since the tail end of the Reagan administration. But the time has come for it to be replaced, and everybody knows it.
There are challenges in its future no doubt, but NASA has made progress using the International Space Station as an advanced research lab, according to an audit released this week by the space agency's Inspector General, Paul Martin.

Given its $60 billion construction price tag and almost $3 billion in annual operating costs, it is essential that NASA make a concerted effort to maximize the research capabilities of the ISS, the IG stated. The report went on to say that NASA uses three significant data points - all of which have been trending in the right direction - to assess utilization of ISS research capabilities:

- Average weekly crew time - Since 2011, NASA has exceeded its goal of spending an average of 35 hours per week on scientific investigations.
- Number of investigations - For the fiscal year (FY) ending October 2008, NASA performed 62 investigations. Since then, the annual number of investigations has been above 100.
- Use of allocated space - NASA expects that the utilization rate for space allocated for research purposes will increase in FY 2013 from about 70 to 75% for internal space and from 27 to 40% for external sites.

While no one measure provides a complete picture of the utilization rate, NASA has generally increased the level of activity for each metric since completion of ISS assembly in 2011.

There has been criticism in the past about the use - or lack thereof - of the ISS as a high-tech lab outpost. The international team that runs the ISS, which includes Canada, Europe, Japan, Russia, and the US, says it has focused on expanding space-based research. At any one time about 150 experiments are ongoing, and more than 600 experiments have been conducted since research began about 11 years ago, the group says. These experiments have led to advances in the fight against food poisoning, new methods for delivering medicine to cancer cells and the development of more capable engines, robotics and materials for use on Earth and in space.
Moving forward though there are many challenges, the NASA report stated. Probably the main issue is money in one form or another. In August 2011, NASA signed a cooperative agreement with the Center for the Advancement of Science in Space, Inc. (CASIS) to manage non-NASA research on the ISS. NASA currently provides $15 million annually to CASIS and the group is expected to raise additional funding through membership fees and donations. The success of this effort largely hinges on two factors: the ability of CASIS to attract sufficient interest and funding from private users and investors, and the availability of reliable transportation to and from the Station for crew and cargo, NASA stated. "CASIS's task is particularly challenging given the historic lack of interest from private entities in conducting research aboard the ISS in the absence of government funding. While CASIS's general goals for FY 2013 to award research grants from funds raised through donations and approve more self-funded investigations are positive first steps toward enhancing a market for non-NASA research aboard the ISS, neither CASIS nor NASA have developed specific, quantifiable metrics to measure CASIS's ability to meet these goals," the NASA IG stated. Maximizing the ISS's research capabilities also depends upon the success of NASA's Commercial Cargo and Crew Programs. The Cargo Program is essential to ensuring the capacity to ferry experiments to and from the Station and the commercial crew vehicles currently under development will make it possible to staff the ISS with a full complement of seven crew members (rather than the current six), thereby increasing the amount of crew time available for research. "The continued availability of dependable transportation for cargo and crew to the ISS is a key factor in maximizing the research capabilities of the Station. 
Four cargo vehicles currently support the ISS - Space Exploration Technologies Corporation's (SpaceX) Dragon, the Russian Progress, the European Automated Transfer Vehicle, and the Japanese H-II Transfer Vehicle. A fifth vehicle, Orbital Sciences Corporation's (Orbital) Cygnus, is expected to begin cargo flights in late 2013. All but the Progress carry NASA payloads to the Station, but only the Dragon can return experiments and other cargo to Earth. The other vehicles burn up during atmospheric reentry and therefore are suitable only for trash disposal. The Dragon's return capability is critical to maximizing the Station's research capabilities, as many experiments require samples to be brought back to Earth for analysis and examination," the NASA report stated.

After NASA retired the Space Shuttle in 2011, the Russian Soyuz became the only vehicle capable of transporting crew to the ISS. Between 2006 and 2008, NASA purchased one seat per year. Beginning in 2009, NASA started purchasing six seats per year. The price per seat has increased over the years from $22 million in 2006, to $25 million in 2010, to $28 million in the first half of 2011. During the second half of 2011, the price per seat jumped to $43 million. The price has continued to increase, NASA said.
Source: http://www.networkworld.com/article/2224911/security/money-at-heart-of-international-space-station-developing-premier-research-lab-chops.html
Malware, a portmanteau of “malicious software”, doesn’t just affect desktop and laptop computers. Smartphones, miniature computers in their own right, also have their own vulnerabilities to these diabolical digital diseases. As cell phones evolved into smartphones and grew more complex and more capable, they became more open to attack by mobile malware. Mobile malware often enters victims’ phones through sketchy third-party apps and suspicious websites the user may visit using their phone’s Internet browser. Some especially mischievous mobile malware will send text messages to premium-rate phone numbers without the owner’s knowledge or consent, running up charges on their phone bills. Smartphone malware often tricks the user into granting it root-level permissions, allowing it to wreak all sorts of havoc on the phone. Mobile spyware can steal passwords, account numbers, and other personal information from your smartphone and distribute it—often for a price—to nefarious third parties. It can compromise your organization’s security and allow sensitive and confidential data to leak out. Malware on your smartphone can allow others to monitor and track your whereabouts.

To understand more about how mobile malware works, feel free to check out Gillware Digital Forensics’ blog post in which we dissect and examine HummingBad, a prolific and sophisticated example of mobile malware. HummingBad’s central purpose is to generate false ad revenue to the fraudsters’ benefit. HummingBad encrypts its most malicious components, making it difficult for mobile anti-malware systems to detect it.

What Is Mobile Malware?

Most malicious malware has several goals. One goal is to collect information from the user, which can be sold to third parties and used to perpetrate fraud or identity theft. Another goal is to aggravate or disadvantage the user.
Mobile malware will often generate intrusive or annoying advertisements (which can have the benefit of generating false ad revenue for the fraudster), send messages without the user’s knowledge to run up their phone bills, compromise the phone’s performance, cause apps to crash without warning, or drain the phone’s battery. Trojans are the most common form of mobile malware. Like malware on desktop and laptop computers, mobile malware is often spread through social engineering. Trojans, like the legendary wooden horse, disguise themselves as seemingly-reputable files or apps but actually aim to steal data or interfere with the device’s operation. Masquerading as legitimate and benign apps proves to be the most effective vector of attack for malware programmers. Malware, especially trojans, usually gains the victim’s trust by purporting itself to be a legitimate app, or an attachment in a legitimate-seeming email or SMS message. Mobile malware can also infect users through compromised advertisements on websites accessed through the phone’s Internet browser.

One particularly vicious type of malware, ransomware, encrypts user files and demands monetary payment in order to decrypt them. Ransomware distributors have made millions of dollars off of these viruses, although the exact figures are difficult to determine. Ransomware mainly targets users of desktop and laptop PCs, but there are forms of ransomware aimed at Android and iOS smartphones as well. Mobile ransomware rarely encrypts user files the way ransomware aimed at PCs does. Instead, mobile ransomware locks the phone’s screen and demands a payment in order to “unlock” the device and allow the phone to function normally again. The family of mobile ransomware includes malefactors such as Pletor, Fusob, and Svpeng.
Potentially Unwanted Applications (PUAs)

In addition to mobile malware such as trojans and worms, there are other potentially unwanted applications which can end up on a phone, such as adware, trackware, and spyware. Adware can collect user information in order to provide targeted ads. Trackware can gather data on the phone’s user and report it back to a third party. Spyware can allow another person to access the text messages, multimedia, and GPS information on an infected user’s phone, and even listen in on the user’s phone calls. Unlike mobile malware, these applications do not normally aim to incapacitate the user’s phone or disadvantage the user.

Potentially unwanted applications are, as their names suggest, potentially unwanted. They do have legitimate and non-malicious uses in the right hands. Adware and trackware on your phone, for example, can be placed on your phone by legitimate providers to deliver user-targeted search results and advertisements informed by your location. Even mobile spyware does have legitimate uses—for example, an employer can place spyware on company-provided mobile phones in order to ensure that the employees only use the devices for legitimate company purposes. Mobile spyware apps are usually free or cheap and leave little evidence of their presence. Most mobile anti-malware apps may not be able to detect mobile spyware.

When these applications are used legitimately, the user consents to have them placed on their phone. These applications become unwanted when they are used by a malicious third party to violate the user’s privacy without their consent. The opportunities to misuse these applications, especially spyware, are nearly endless, and can be exploited by malicious employers, or by abusive spouses, partners, parents, or stalkers. Some forms of spyware are extremely invasive and can activate an infected user’s camera or microphone without their knowledge.
Mobile Forensics and PUAs

Potentially-unwanted applications leave as little of a footprint as possible, in order to avoid detection by the user. However, as any mobile forensics investigator worth their salt is well aware, everything leaves behind a trace. When a phone’s owner has reason to believe that somebody seems to have intimate knowledge of the owner’s movements or contents of their phone calls or SMS messages, a skilled mobile forensics investigator can examine the phone for any telltale traces of spyware.

Threats to Smartphone Security

While all mobile operating systems are bound to have bugs and unavoidable security holes, mobile malware tends to target Android devices more often than iOS devices, with Android threats comprising the vast majority of all discovered mobile malware threats. This is mainly due to Android’s dominance of the worldwide market share, with over 80% of all smartphones running some version of the open-source OS.

Operating System Fragmentation

Operating system fragmentation can make it difficult for smartphone manufacturers to effectively halt the spread of mobile malware by patching their smartphones’ security holes. Over the years, both the Android and iOS operating systems have seen numerous updates and revisions. But not every smartphone user carries the latest version of their phone’s O/S in their pockets. This often happens not just because users are lazy or hesitant to upgrade, but also because users of older models simply cannot upgrade their phones due to a lack of hardware support. With the users of smartphones fragmented like this, patches to the bugs and oversights in mobile O/Ses which mobile malware developers exploit cannot reach all smartphone users. For example, a Samsung-distributed security patch for Bob’s Samsung Galaxy running Android Kitkat would not help Alice, whose older model of Samsung smartphone cannot support any version of Android more advanced than Ice Cream Sandwich.
What Services Does Gillware Digital Forensics Offer for Victims of Mobile Malware?

One of the principles of digital forensics, and the field of forensics in general, is that everything leaves a trace. No matter how “undetectable” mobile malware or spyware purports itself to be, a skilled forensic investigator with intimate knowledge of Android, iOS, and other mobile device O/S architecture and mobile forensics can find the tiny footprints it inevitably leaves behind. Gillware Digital Forensics leverages the expertise of our president Cindy Murphy, a digital forensics investigator with decades of experience in the field, and Gillware Data Recovery, a data recovery lab that has recovered data from all forms of storage devices for over ten years. The mobile forensics experts at Gillware Digital Forensics can provide complete forensic analysis of any model of Android or iOS smartphone. A full forensic analysis of an infected smartphone can determine what type of malware or PUA infected the phone, how the malefactor gained entry, and in which ways the phone and its owner’s data and personal information has been compromised. To get started, follow the link below to request an initial consultation with Gillware Digital Forensics.
Source: https://www.gillware.com/forensics/mobile-malware
By Betty Hoeffner

In classrooms throughout America, adolescents and teens experience painful things — whether it be anxiety, depression, disorders or being bullied. Some come from homes where a cycle of domestic abuse exists and suffering is the norm. Others experience stress, bullying or pressure at school during the day. No matter the cause, though, many of these students are dealing with their pain and suffering improperly. It isn’t uncommon for these students to numb their emotional pain with binge drinking, drug use or other destructive behaviors. And it’s no secret that many of these behaviors can lead to these young people losing their lives.

Why Are Teens Hurting?

Teen self-worth depends on the approval of others, and their desire for social acceptance can drive them to engage in destructive behaviors, even if they know these behaviors are harmful. A recent study by the Partnership for a Drug-Free America showed that 73 percent of teens report the number one reason for using drugs is to deal with the pressures and stress at school. A 2007 Partnership Attitude Tracking study from the Partnership also reported 65 percent of teens say they use drugs to “feel cool.” The same study found that 65 percent of teens use drugs to “feel better about themselves.” This shows that not only are teens using drugs to feel cool to others, but they’re also using drugs to feel good about themselves. When adolescents are bullied, they too experience lower self-worth and emotional problems such as depression. A 2010 National Institute of Health study reports that more than a fifth of U.S. adolescents in school had been physically bullied at least once in the past two months. Additionally, 53.6 percent had been verbally bullied, and 51.4 percent felt socially bullied. Electronic, or cyberbullying, on computers, smartphones and other digital devices can lead to even higher levels of depression than traditional face to face forms of aggression. And bullies feel depressed, too.
Bullying is associated with other problems, such as substance abuse, obesity, racism and youth suicide.

How Should Teens Learn to Heal?

Clearly, the statistics point to an issue — adolescents today are feeling pain and making harmful choices to try and fix that pain. A solution had to be created. Based on these staggering statistics and my own personal experience, I started a program called Deal, Feel & Heal. The program gently guides youth in peer-to-peer learning settings and by using emotional learning methodologies to help them discover that resorting to drugs and alcohol because of bullying, and other forms of hurtful behavior, is not a road they, or their friends, should consider. My second objective was to influence any students who may be using drugs or alcohol to stop this destructive habit.

The United Nations Office on Drugs and Crime studied the use of peer-to-peer strategies in reducing and preventing drug abuse. The technique is found to work because those who fall in the same peer group feel more comfortable communicating with each other. Clear communication lines mean that there is more room for understanding and learning. Research conducted by Roger Weissberg, professor of psychology at the University of Illinois Chicago, in the American Journal of Psychology*, found that students who enrolled in social and emotional learning empathy teaching programs scored at least 10 percentage points higher on achievement tests than peers who weren’t. This research, which was conducted through 300 scientific studies, also found that discipline problems were cut in half. According to Weissberg, “Some teachers may be skeptical [about social and emotional learning] at first, but they are won over when their students learn more, are more engaged and better problem solvers.”

An insight from young people

As part of my program’s development, I worked with four groups of students in Indiana and Texas.
These groups were made up of 20 high school students, 20 middle school students and 13 elementary school students, and we spent three, one-hour activity sessions together. Participating students were informed that their wisdom was going to be shared in a book and curriculum I was writing called “Deal Feel Heal.” They were excited to know their insights would be helping youth all over the world. Gaining their insights makes it possible for educators, adults, guardians and leaders to understand what exactly is impacting young lives today. Following are what the youth said about the pain they felt:

What Hurts Us? How Do We Handle Hurt?

When teenagers and adolescents incorrectly deal with pain, they can beat themselves up about what they’re going through, block out their feelings or direct their pain toward others through bullying. Additionally, young people dealing with negative emotions might commit crimes, turn to substance abuse, develop disorders or experience depression or anxiety. As an adult, you might notice adolescents in your life go through the aforementioned situations. They may push those away who try to help. Ignoring their pain is just as harmful as dealing with it in the wrong way, such as through extreme behaviors.

Why Do We Ignore or Stuff Away Pain? What Are the Dangerous Outcomes of Stuffing Our Feelings?

Young people answered that the outcomes of suffering were harmful behaviors such as drinking while driving, self-harm (like cutting themselves), drug use, hoarding and suicide. Other outcomes, although extreme and some physical, can include: obesity, baldness, hallucinations, high blood pressure, violence toward others and mental disorders.

How Can We Deal with Pain? What Can You Do if a Teen is Hurting?

If a young person in your life is hurting or even considering suicide, you should step in and help — always. Tell a trusted adult, or call a suicide hotline. It’s also a great idea to help him or her find a counselor or therapist to talk to.
Praying, attending church or finding a religious guide can also help your friend in a tough time. The best thing you can do is be there for your friend or young person and help them navigate through this time in a healthy way that does not involve harming himself or others. To learn more about dealing, feeling and healing emotions, check out my book at www.dealfeelheal.org. The accompanying curriculum will be released in September 2016. *Source: American Journal of Psychology, Vol. 58, No.6/7 2003, pages 466-474. Betty Hoeffner is the founder of Prevent Bullying Now and the co-founder/President of Hey U.G.L.Y. – Unique Gifted Lovable You, the international nonprofit organization that empowers youth to be part of the solution to bullying, substance abuse and suicide. Hoeffner is the author of the Stop Bullying Handbook – A Guide for Students And Their Friends; Hue-Man Being – A Book To End Racism; and DEAL FEEL HEAL – Keys to Understanding and Healing Emotional Pain. Hoeffner and the Hey U.G.L.Y. organization have been an instrumental partner in informing Impero Education Pro’s internet safety keyword libraries, helping to update them for US audiences.
Source: https://www.imperosoftware.com/bullying-depression-disorders-cause-pain-for-students-how-can-we-help-them-deal/
Dilts T.E.,University of Nevada, Reno | Weisberg P.J.,University of Nevada, Reno | Yang J.,University of Nevada, Reno | Olson T.J.,University of Nevada, Reno | And 2 more authors. Annals of the Association of American Geographers | Year: 2012 In arid regions of the world, the conversion of native vegetation to agriculture requires the construction of an irrigation infrastructure that can include networks of ditches, reservoirs, flood control modifications, and supplemental groundwater pumping. The infrastructure required for agricultural development has cumulative and indirect effects, which alter native plant communities, in parallel with the direct effects of land use conversion to irrigated crops. Our study quantified historical land cover change over a 150-year period for the Walker River Basin of Nevada and California by comparing direct and indirect impacts of irrigated agriculture at the scale of a 10,217 km 2 watershed. We used General Land Office survey notes to reconstruct land cover at the time of settlement (1860-1910) and compared the settlement-era distribution of land cover to the current distribution. Direct conversion of natural vegetation to agricultural land uses accounted for 59 percent of total land cover change. Changes among nonagricultural vegetation included shifts from more mesic types to more xeric types and shifts from herbaceous wet meadow vegetation to woody phreatophytes, suggesting a progressive xerification. The area of meadow and wetland has experienced the most dramatic decline, with a loss of 95 percent of its former area. Our results also show Fremont cottonwood, a key riparian tree species in this region, is an order of magnitude more widely distributed within the watershed today than at the time of settlement. In contrast, areas that had riparian gallery forest at the time of settlement have seen a decline in the size and number of forest patches. © 2012 Taylor and Francis Group, LLC. 
Source Sedinger J.S.,University of Nevada, Reno | White G.C.,Colorado State University | Espinosa S.,100 Valley Road | Partee E.T.,15 East 4th Street | Braun C.E.,Grouse Inc. Journal of Wildlife Management | Year: 2010 We used band-recovery data from 2 populations of greater sage-grouse (Centrocercus urophasianus), one in Colorado, USA, and another in Nevada, USA, to examine the relationship between harvest rates and annual survival. We used a Seber parameterization to estimate parameters for both populations. We estimated the process correlation between reporting rate and annual survival using Markov chain Monte Carlo methods implemented in Program MARK. If hunting mortality is additive to other mortality factors, then the process correlation between reporting and survival rates will be negative. Annual survival estimates for adult and juvenile greater sage-grouse in Nevada were 0.42 ± 0.07 (x ̄ ± SE) for both age classes, whereas estimates of reporting rate were 0.15 ± 0.02 and 0.16 ± 0.03 for the 2 age classes, respectively. For Colorado, average reporting rates were 0.14 ± 0.016, 0.14 ± 0.010, 0.19 ± 0.014, and 0.18 ± 0.014 for adult females, adult males, juvenile females, and juvenile males, respectively. Corresponding mean annual survival estimates were 0.59 ± 0.01, 0.37 ± 0.03, 0.78 ± 0.01, and 0.64 ± 0.03. Estimated process correlation between logit-transformed reporting and survival rates for greater sage-grouse in Colorado was ρ 0.68 ± 0.26, whereas that for Nevada was ρ 0.04 ± 0.58. We found no support for an additive effect of harvest on survival in either population, although the Nevada study likely had low power. This finding will assist mangers in establishing harvest regulations and otherwise managing greater sage-grouse populations. © The Wildlife Society. Source Coates P.S.,U.S. Geological Survey | Casazza M.L.,U.S. Geological Survey | Ricca M.A.,U.S. Geological Survey | Brussee B.E.,U.S. Geological Survey | And 8 more authors. 
Journal of Applied Ecology | Year: 2016 Predictive species distributional models are a cornerstone of wildlife conservation planning. Constructing such models requires robust underpinning science that integrates formerly disparate data types to achieve effective species management. Greater sage-grouse Centrocercus urophasianus, hereafter 'sage-grouse' populations are declining throughout sagebrush-steppe ecosystems in North America, particularly within the Great Basin, which heightens the need for novel management tools that maximize the use of available information. Herein, we improve upon existing species distribution models by combining information about sage-grouse habitat quality, distribution and abundance from multiple data sources. To measure habitat, we created spatially explicit maps depicting habitat selection indices (HSI) informed by >35 500 independent telemetry locations from >1600 sage-grouse collected over 15 years across much of the Great Basin. These indices were derived from models that accounted for selection at different spatial scales and seasons. A region-wide HSI was calculated using the HSI surfaces modelled for 12 independent subregions and then demarcated into distinct habitat quality classes. We also employed a novel index to describe landscape patterns of sage-grouse abundance and space use (AUI). The AUI is a probabilistic composite of the following: (i) breeding density patterns based on the spatial configuration of breeding leks and associated trends in male attendance; and (ii) year-round patterns of space use indexed by the decreasing probability of use with increasing distance to leks. The continuous AUI surface was then reclassified into two classes representing high and low/no use and abundance. Synthesis and applications. 
Using the example of sage-grouse, we demonstrate how the joint application of indices of habitat selection, abundance and space use derived from multiple data sources yields a composite map that can guide effective allocation of management intensity across multiple spatial scales. As applied to sage-grouse, the composite map identifies spatially explicit management categories within sagebrush steppe that are most critical to sustaining sage-grouse populations as well as those areas where changes in land use would likely have minimal impact. Importantly, collaborative efforts among stakeholders guide which intersections of habitat selection indices and abundance and space use classes are used to define management categories. Because sage-grouse are an umbrella species, our joint-index modelling approach can help target effective conservation for other sagebrush obligate species and can be readily applied to species in other ecosystems with similar life histories, such as central-placed breeding. © 2016 British Ecological Society. Source Shanthalingam S.,Washington State University | Goldy A.,Washington State University | Bavananthasivam J.,Washington State University | Subramaniam R.,Washington State University | And 13 more authors. Journal of Wildlife Diseases | Year: 2014 Mannheimia haemolytica consistently causes severe bronchopneumonia and rapid death of bighorn sheep (Ovis canadensis) under experimental conditions. However, Bibersteinia trehalosi and Pasteurella multocida have been isolated from pneumonic bighorn lung tissues more frequently than M. haemolytica by culture-based methods. We hypothesized that assays more sensitive than culture would detect M. haemolytica in pneumonic lung tissues more accurately. Therefore, our first objective was to develop a PCR assay specific for M. haemolytica and use it to determine if this organism was present in the pneumonic lungs of bighorns during the 2009-2010 outbreaks in Montana, Nevada, and Washington, USA. 
Mannheimia haemolytica was detected by the species-specific PCR assay in 77% of archived pneumonic lung tissues that were negative by culture. Leukotoxin-negative M. haemolytica does not cause fatal pneumonia in bighorns. Therefore, our second objective was to determine if the leukotoxin gene was also present in the lung tissues as a means of determining the leukotoxicity of M. haemolytica that were present in the lungs. The leukotoxin-specific PCR assay detected leukotoxin gene in 91%of lung tissues that were negative for M. haemolytica by culture. Mycoplasma ovipneumoniae, an organism associated with bighorn pneumonia, was detected in 65%of pneumonic bighorn lung tissues by PCR or culture. A PCR assessment of distribution of these pathogens in the nasopharynx of healthy bighorns from populations that did not experience an all-age die-off in the past 20 yr revealed that M. ovipneumoniae was present in 31%of the animals whereas leukotoxin-positive M. haemolytica was present in only 4%. Taken together, these results indicate that culture-based methods are not reliable for detection of M. haemolytica and that leukotoxin-positive M. haemolytica was a predominant etiologic agent of the pneumonia outbreaks of 2009-2010. © Wildlife Disease Association 2014. Source
Source: https://www.linknovate.com/affiliation/100-valley-road-51929/all/
With the rapid development of the Internet age, a fiber optic modem can serve you well, especially when you’re dealing with large amounts of data. Fiber optic modems (also spelled fibre optic modems) transfer data quickly and efficiently. They are generally offered in two versions: multimode and single-mode.

A fiber optic modem receives incoming optical signals over fiber optic cable and translates them back to electronic form, supporting full-duplex transmission. They are available in single-channel and multi-channel configurations. FiberStore fiber optic modems are available in various form factors depending upon the protocol selected, such as RS-232/RS-485/RS-422 fiber optic modems. Our FOM has higher bandwidth and greater electromagnetic immunity than wire-based modems. Together with multimode or single-mode fiber, the fiber optic modem allows data to be transmitted by converting electrical signals to light. It provides transmission distances up to 2km (multimode) or up to 20km/40km/60km (single-mode). The FOM allows users to replace existing coaxial cable communication links with lightweight fiber optic cable.

The advantages of using fiber optic cables are as follows:
1) Lighter weight and smaller size for much quicker deployment
2) Higher bandwidth for increased throughput
3) Lower loss for long-distance repeaterless communication up to 16 kilometers
4) Better quality - safe from electromagnetic interference from any source
5) More secure - no electromagnetic signature
6) Less expensive

Note: The fiber optic modem is the new kid on the block as it joins cable, DSL, satellite and dialup in the battle for Internet access superiority. Although it’s not available in all areas, its higher speeds and reliability make it a major contender. Internet or network connections that require a fiber optic modem are more commonly used commercially rather than residentially.
Not all Internet service providers offer a fiber optic option, so the first step to choosing a fiber optic modem is to make sure you actually need one. Most home Internet connections use copper wires and coaxial cables, though these may connect to fiber optic wiring at the curb. Check with your Internet provider to see what types of modems can work with your particular service.
Source: http://www.fs.com/blog/help-you-to-buy-fiber-optic-modems-from-fiberstore.html
Kepler space telescope's mission may be over
By Frank Konkel - May 22, 2013

The reaction wheels, shown in this diagram, are the source of Kepler's woes. (NASA graphic)

NASA engineers have successfully transitioned the planet-seeking Kepler telescope to "point rest state" -- an effort to conserve fuel while an evaluation team plots a recovery effort following a mechanical failure that crippled the craft's navigational capabilities on May 4. In point rest state, the $550 million spacecraft's four reaction wheels will not be powered, instead using a combination of its thrusters and solar pressure to tip the spacecraft back and forth like a pendulum as it orbits the sun some 40 million miles away from Earth. The first of its four reaction wheels failed in July 2012, and the recent failure of a second made it impossible for the craft to pivot with the necessary precision for its 95-megapixel camera to ingest starlight and funnel it through a 1.4-meter wide mirror.

Some of Kepler's redundant systems, powered off as scientists tried to isolate the problem, have been powered on to keep the spacecraft within nominal operating parameters, according to Kepler Mission Manager Roger Hunter, but Kepler will not be collecting any scientific data in point rest state. If an anomaly response team that includes members of the NASA Ames Research Center, the Jet Propulsion Laboratory, contractor Ball Aerospace and reaction wheel manufacturer UTC cannot find a solution to the failed reaction wheels, Kepler's days of spotting planets may be over forever. "For now, (point rest state) is working very well and keeping Kepler safe," Hunter said. "The team will continue to analyze recent telemetry received from the spacecraft. This analysis, and any planned recovery actions, will take time, and will likely be on the order of weeks, possibly months." The team will validate any such efforts on the spacecraft test bed, he added.
NASA launched Kepler in 2009 to seek Earth-like, potentially habitable planets in the Milky Way galaxy. It has found 132 confirmed such planets, and more than 2,700 potential ones that scientists will attempt to confirm using ground telescopes in the coming months and years. Kepler detects planets by watching for changes in the brightness of stars. When an object orbiting the star comes between Kepler's view of the star and Earth, Kepler can detect minuscule changes in brightness, sometimes providing enough information for scientists to estimate the potential planet's size and orbiting distance from the star. Scientists have said they expect more than 90 percent of the planets Kepler has found thus far to be confirmed. NASA officials said scientists will sift through a large amount of so-far unexamined Kepler data over the next two years. Frank Konkel is a former staff writer for FCW.
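The transit geometry behind these brightness measurements can be illustrated with a toy calculation. The relation used here (the fractional dip in brightness is roughly the square of the planet-to-star radius ratio) is the standard transit-photometry approximation, not code from the Kepler mission; the function name and example numbers are our own:

```python
import math

def planet_star_radius_ratio(flux_out_of_transit, flux_in_transit):
    """Estimate Rp/Rs from the fractional dip in stellar brightness
    during a transit, using depth = (Rp/Rs)**2."""
    depth = (flux_out_of_transit - flux_in_transit) / flux_out_of_transit
    return math.sqrt(depth)

# A Jupiter-sized planet crossing a Sun-like star dims it by about 1%,
# implying a radius ratio near 0.1.
ratio = planet_star_radius_ratio(1.0, 0.99)
```

An Earth-sized planet crossing a Sun-like star produces a dip of only about 0.008 percent, which is why detecting "minuscule changes in brightness" demands photometry of extraordinary precision.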
Source: https://fcw.com/articles/2013/05/22/kepler-planet-mission.aspx
Definition: A distribution sort where input elements are initially distributed to several buckets based on an interpolation of the element's key. Each bucket is sorted if necessary, and the buckets' contents are concatenated.

Also known as bin sort.

Generalization (I am a kind of ...)
Specialization (... is a kind of me.) histogram sort, counting sort, top-down radix sort, postman's sort, distributive partitioning sort.
See also range sort, radix sort, hash heap.

Note: A bucket sort uses fixed-size buckets (or a list per bucket). A histogram sort sets up buckets of exactly the right size in a first pass. A counting sort uses one bucket per key. The space required is one bucket for every few possible key values, but is O(n log log n) taking into account a distribution of keys. That is, some buckets will have a lot of keys. Bucket sorts work well for data sets where the possible key values are known and relatively small and there are on average just a few elements per bucket. This means the cost of sorting the contents of each bucket can be reduced toward zero. The ideal result is if the order in each bucket is uninteresting or trivial, for instance, when each bucket holds a single key. The buckets may be arranged so the concatenation phase is not needed, for instance, the buckets are contiguous parts of an array. Bucket sorts can be stable.

If you have suggestions, corrections, or comments, please get in touch with Paul Black.

Entry modified 16 November 2009. HTML page formatted Mon Feb 2 13:10:39 2015.

Cite this as: Paul E. Black, "bucket sort", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 16 November 2009. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/bucketsort.html
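The distribute-by-interpolation, sort-each-bucket, then-concatenate scheme in the definition can be sketched in a few lines of Python. The function name, the default bucket count, and the use of the built-in stable sort within each bucket are our own illustrative choices, not part of the dictionary entry:

```python
def bucket_sort(items, num_buckets=10):
    """Distribute numeric keys into buckets by linear interpolation
    over the key range, sort each bucket, then concatenate them."""
    if len(items) < 2:
        return list(items)
    lo, hi = min(items), max(items)
    if lo == hi:
        return list(items)  # all keys equal; nothing to do
    buckets = [[] for _ in range(num_buckets)]
    for x in items:
        # Interpolate the key into a bucket index in [0, num_buckets - 1].
        idx = int((x - lo) * (num_buckets - 1) / (hi - lo))
        buckets[idx].append(x)
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))  # each bucket is sorted if necessary
    return result
```

With keys spread evenly across the range, each bucket holds only a few elements, so the per-bucket sorts cost next to nothing; and because elements are appended in input order and Python's `sorted` is stable, this sketch is a stable sort.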
The Certified Information Systems Security Professional (CISSP) certification is a prestigious qualification for professionals working in the field of cybersecurity. The certification requires exhaustive preparation in eight domains: security and risk management, asset security, security engineering, communication and network security, identity and access management, security assessment and testing, security operations, and software development security. These eight domains thoroughly cover a variety of subjects related to cybersecurity and provide the indispensable managerial knowledge base to train well-rounded information security professionals.

The security challenges in cyberspace evolve incessantly, and security measures have to accommodate such a dynamic landscape. In some demanding security environments, most notably security operations centers (SOCs), personnel must stay alert to the dynamics of network threats and be wary of static security solutions and obsolete practices for managing and evaluating security threats. The CISSP thus provides an authoritative framework for the competences required in the managerial duties of SOCs. CISSP candidates are trained to be proficient in the foundational security operations concepts and their applications, a capacity that can be considered preliminary preparation for further technical training.

Foundational Security Operations Concepts Overview

In many highly digitalized and networked economies, SOCs represent the heart and veins of large organizations, in particular the financial institutions and government departments that manage huge volumes of network traffic and valuable digital assets. SOCs are the top administrative structures that define, supervise, and coordinate information and communication technologies for their affiliated organizations.
They adopt an advanced security information and event management (SIEM) system to scrutinize anomalies in systems and networks, ensuring effective and well-functioning operations for their organizations as well as responding to adversaries. Examples of SOCs include the security defense center (SDC) and the network security operations center (NSOC) often found in the intelligence and military services of governments. Since the missions and duties of SOCs can be sensitive and decisive in the daily functioning of an organization, managing a SOC requires a set of rigorous fundamental concepts in security operations. These concepts can be grouped into six categories: the principle of least privilege (POLP) and need-to-know (NTK), the separation of duties and responsibilities, the monitoring of special privileges, the rotation of job duties, the lifecycle of information, and service-level agreements.

The monetary and human resources required for setting up a SOC can be tremendous. Moreover, the high requirements and capabilities of running a SOC might pose a management challenge for organizations. These two considerations may lead organizations to outsource their SOCs to external parties. Indeed, entrusting sophisticated cyber-management to seasoned professionals can be an ideal approach to optimize the investment against cyber-threats. Nevertheless, ceding the control and protection of critical information systems and operations, such as accounting, legal, and payroll, to a third party is an important decision. Whether building or subcontracting the SOC of a large organization, personnel (CISSP professionals) who understand and can implement the management methodology behind the six aspects of fundamental security operations concepts are imperative.

Principle of Least Privilege (POLP) and Need-to-Know (NTK)

To begin, the concepts of POLP and NTK are complementary, like two sides of the same coin.
On the one hand, the POLP refers to limiting the workstation, operating system, and applications to the minimal functioning level that operating personnel need to perform their duties. A ubiquitous example is the different accounts of an operating system: administrator, employee, guest, and visitor, to name a few. Most of the time, non-administrator accounts are not permitted to install, configure, or modify applications that are unknown and could pose a threat to the existing operating environment. This measure helps prevent low-level personnel from mistakenly or maliciously installing or activating remote access tools (RATs) and various types of malware. On the other hand, the NTK describes the same limitation at the data level, which can be understood as the confidentiality of data: it signifies the least data and information that the operator needs to know to carry out his or her duties.

POLP and NTK always work hand in hand. One example is the operation of a particular information management system: non-administrator personnel do not need to know how to install and set up their workstation; they simply need the minimal knowledge of how to use the system, which is itself configured along POLP lines. This is a typical scenario of one of the key security operations concepts.

Monitoring Special Privileges

The notions of POLP and NTK are sometimes subject to change when the relevant personnel have to be granted more system privileges and data access in order to perform their duties fully. Such occasions can occur, for example, when a higher-level system administrator or manager is absent for a significant period of time and current lower-level personnel have to stand in for the relevant role. The temporary easing or increase of system privileges is called privilege bracketing. The duration and appropriate accesses are strictly defined and restricted to the least necessary ones.
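The interplay of POLP and NTK can be sketched as a single access-control check. This is a hypothetical illustration only (the roles and data classes are invented), not a CISSP-mandated implementation:

```python
# POLP: the minimal set of actions each role may perform.
ROLE_ACTIONS = {
    "administrator": {"install", "configure", "use"},
    "employee": {"use"},
    "guest": set(),
}

# NTK: the minimal set of data classes each role may see.
ROLE_DATA = {
    "administrator": {"system-config", "user-records"},
    "employee": {"user-records"},
    "guest": set(),
}

def is_permitted(role, action, data_class):
    """Grant access only when BOTH the requested action (POLP) and
    the requested data class (NTK) fall inside the role's minimal sets."""
    return (action in ROLE_ACTIONS.get(role, set())
            and data_class in ROLE_DATA.get(role, set()))
```

In this sketch an employee can use the system against user records, but cannot install software (a POLP denial) or touch the system configuration (an NTK denial), mirroring the non-administrator account example above.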
In addition, despite the fact that the POLP and NTK notions are supposed to apply to all personnel, there are inevitably some with higher, unrestricted, and more special privileges that might arouse concerns of possible abuse of authority. Organizations should not neglect the need to monitor those personnel who have exclusive access to and authority over the entire database and network systems. Registering system logs and regular third-party system/site audits are efficient preventive monitoring measures. In case of abusive activities, such system records can serve forensic purposes for legal pursuits.

Separation of Duties and Responsibilities

Besides relying on third-party monitoring systems and adopting the notions of POLP and NTK, organizations can also strengthen their defenses with internal mechanisms that separate the duties of personnel holding key functions and responsibilities. The core idea is to break the decision-making process down into a multi-stakeholder model so that no single person can execute a decision alone. Each step of the process should be assigned to a different person so as to establish checks and balances within the decision-making process. In summary, the separation of duties and responsibilities should involve multiple personnel at different stages of a decision process. In general, the making, execution, monitoring, and evaluation of a decision should therefore be assigned to four different people. In this way, no single person can monopolize the entire decision implementation cycle, which minimizes conflicts of interest. Nonetheless, it should be noted that the separation of duties and responsibilities is one of the underlying principles that facilitates the management of a SOC. It does not necessarily reduce the chance of misconduct to zero; instead, it means that corrupting the whole chain of trust becomes compulsory for any wrongdoing to succeed.
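The four-stage assignment rule above (making, execution, monitoring, evaluation each held by a different person) can be sketched as a simple validation routine. The stage names and return shape are illustrative assumptions, not part of any standard:

```python
def check_separation_of_duties(assignments):
    """assignments maps each stage of the decision cycle to the person
    responsible for it. Returns a list of violations, where each
    violation is (earlier_stage, later_stage, person) showing one
    person holding two stages. An empty list means the assignment
    respects the separation of duties."""
    stages = ("making", "execution", "monitoring", "evaluation")
    seen = {}          # person -> first stage they were assigned
    violations = []
    for stage in stages:
        person = assignments[stage]
        if person in seen:
            violations.append((seen[person], stage, person))
        else:
            seen[person] = stage
    return violations
```

An organization could run such a check as part of the regular duty reviews the text recommends, flagging any assignment where the chain of checks and balances collapses onto one individual.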
Organizations should regularly check and study the duties of personnel to verify that there is no conflict of interest. In fact, the fundamental security concepts of a robust security management environment emphasize compartmentalizing privileges, duties, accesses, and responsibilities.

Rotation of Job Duties

It is obvious that large organizations have become increasingly digitalized and that the personnel administering databases and networks can, over time, accumulate power as well as identify vulnerabilities in the decision-making process. Besides constant evaluation of the workflow and system logs to ensure no privilege exploits, a further step is job rotation. The accumulation of power in key positions not only affects decision implementation processes; it can also deal considerable damage to the institution in the case of a revenge resignation by key personnel. They might leave their positions, taking all their credentials and savoir-faire, on short notice. This creates a professional vacuum that can paralyze the work routine of the organization. Job rotation addresses that problem, and it generates additional advantages: it further prevents the monopoly of duties and encourages professional mobility for other personnel within the organization.

Information Lifecycle Management

The aforementioned concepts focus on managing the human aspect, in other words, the "soft" side of cybersecurity. The personnel are employed to safeguard the digital assets and networks. Taking a step back, if the digital assets and sensitive information are ambiguous and unmanaged, it is unlikely that the organization can effectuate appropriate management practices. Thus, the strategies for managing the duties of personnel can be further perfected through a rigorous assessment of the data and information that the organization is protecting. Identifying the information assets in the organization is always the first phase of managing information.
The identification process paves the way to the valuation of the various digital assets. The organization and its security professionals can then design defense mechanisms, internal workflows (POLP, NTK, and separation of duties), countermeasures, and business continuity and response plans based on the values of the identified digital assets. Two major guidelines assist the development of information lifecycle management. First, organizations should be able to assign an appropriate timeframe to each data/information category: digital assets should be categorized as short-term, mid-term, long-term, or permanent. Second, protection investment should be proportionate to a reasonable estimate of the monetary value of the digital asset. It would not make sense to allocate $20 of a security budget to secure a digital asset worth $1. Neither would it be sensible to store obsolete data permanently.

Service-Level Agreements (SLAs)

Information security management involves a complex supply chain. Internal personnel and external service providers are bound together to create competent solutions for security operations purposes. In this context, the external service provider has to provide a well-structured SLA stating clearly the different services, resources, liabilities, performance levels, and other crucial conditions related to the implementation of the specific service. The SLA is a sound contract and practice for aligning the objectives of third-party service providers with those of the purchasing organization. It is also the occasion for both parties to discuss the detailed terms for overseeing the operations scenarios. The various ideas about the management of privileges and the information lifecycle are all indispensable topics in drafting the SLA. In a way, the SLA can even serve as a checklist for internal personnel to verify that all privilege and authority issues are settled.
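The two lifecycle guidelines above can be sketched as a small review routine. The retention categories come from the text; the numeric thresholds and return format are invented for illustration:

```python
# Retention categories from the text; durations in years are
# illustrative (None means permanent retention).
RETENTION_YEARS = {"short-term": 1, "mid-term": 3, "long-term": 10, "permanent": None}

def review_asset(value, protection_cost, category):
    """Flag the two mismatches the guidelines warn about: an unknown
    retention category, and protection spending that exceeds the
    estimated value of the asset (e.g. $20 spent on a $1 asset)."""
    issues = []
    if category not in RETENTION_YEARS:
        issues.append("unknown retention category")
    if protection_cost > value:
        issues.append("protection cost exceeds asset value")
    return issues
```

A review like this would run over the asset inventory produced by the identification phase, before defense mechanisms and budgets are finalized.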
Going through the fundamental security concepts and suggestions for their application facilitates an understanding of how to deploy effective security policies in organizations. As one of the eight domains of the CISSP certification, these concepts lay the foundation for the implementation of elementary security practices. The need to manage significant data and information flows, as well as their security, has expanded progressively in the last decade. Having a seasoned team of cybersecurity professionals overseeing security threats is no longer reserved for government institutions. Companies and other organizations holding high-value digital assets should not hesitate to seek professional advice when it comes to managing their SOCs.
As a systems administrator, you should already be familiar with the basics of memory, such as the differences between physical and virtual memory. What you might not fully understand is how the Virtual Memory Manager (VMM) works in AIX® 7 and how it relates to performance tuning. In AIX 7, it is also worth considering the effect of virtual memory and how it is used and applied within workload partitions (WPARs). This article looks at some of the monitoring tools you can use to tune your systems and outlines some of the more important AIX 7 memory management systems, including how the Virtual Memory Manager operates and the effects of the dynamic variable page size. Implementing these enhancements, as they apply to your systems environment, can optimize memory performance on your box. While you might find tuning your memory more difficult than tuning other subsystems, the reward is often greater. Depending on the type of system you are running, there might also be specific tuning recommendations that should be applied. To help validate the findings, let's use a specific example and discuss some best practices for setting these parameters. Tuning one or two parameters on the fly can, in some cases, make a significant difference in the overall performance of your system. One thing that does not change, regardless of which subsystem you are looking to tune: you should always think of tuning as an ongoing process. The best time to start monitoring your systems is when you first put a system into production and it is running well (rather than when your users are screaming about slow performance). You can never really be sure there is a problem without a real baseline of what the system looked like when it was behaving normally. Further, only one change should be made at a time, and data should be captured and analyzed as quickly as possible after that change to determine what difference, if any, the change really made.
This section gives an overview of memory as it relates to AIX 7. We discuss how AIX 7 uses virtual memory to address more memory than is physically on your system. We also explain how the Virtual Memory Manager (VMM) actually works and how it services requests.

Any discussion of memory and AIX 7 must start with a description of the VMM. AIX newbies are sometimes surprised to hear that the VMM services all memory requests from the system, not just virtual memory. When RAM is accessed, the VMM needs to allocate space, even when there is plenty of physical memory left on the system. It implements a process of early allocation of paging space. Using this method, the VMM plays a vital role in helping manage real memory, not just virtual memory. Here is how it works.

In AIX 7, all virtual memory segments are partitioned into pages. The default page size is 4KB, although it can be changed to other sizes depending on the processor environment being used. POWER5+ or later processors can also use 64KB, 16MB, and 16GB page sizes; the POWER4 architecture can also support the 16MB page size. The 16MB page size is known as large and the 16GB page size as huge; both have use cases for very large memory applications. With POWER6, variable page size support (VPSS) was introduced, which means that the system will use larger pages as the application requests larger chunks of memory. Different page sizes can be mixed within the OS concurrently, with different applications making use of different page sizes. In addition, pages can be dynamically resized from 4KB to 64KB, collecting groups of 4KB pages to make 64KB pages. This improves performance by allowing the application to access memory in larger single chunks, instead of many smaller ones. Tuning of VPSS can be managed using the vmo tuning tool.
Allocated pages can be either RAM or paging space (virtual memory stored on disk). The VMM also maintains what is referred to as a free list, defined as the set of unallocated page frames, which are used to satisfy page faults. There is usually a very small number of unallocated pages (which you configure) that the VMM uses to free up space and reassign page frames. The virtual memory pages whose page frames are to be reassigned are selected using the VMM's page replacement algorithm. This paging algorithm determines which virtual memory pages currently in RAM ultimately have their page frames returned to the free list. AIX 7 uses all available memory, except memory that is configured to be unallocated, which is the free list.

To reiterate, the purpose of the VMM is to manage the allocation of both RAM and virtual pages. From here, you can determine that its objectives are to minimize both the response time of page faults and the use of virtual memory where it can. Obviously, given the choice between RAM and paging space, most people would prefer to use physical memory, if RAM is available. The VMM also classifies virtual memory segments into two distinct categories: working segments using computational memory, and persistent segments using file memory. It is extremely important to understand the distinction between the two, as it helps you tune your systems to their optimum capabilities.

Computational memory is used while your processes are actually working on computing information. These working segments are temporary (transitory) and exist only until a process terminates or the page is stolen. They have no permanent disk storage location. When a process terminates, both the physical and paging space allocations are released in many cases. When there is a large spike in available pages, you can actually see this happening while monitoring your system.
When free physical memory starts getting low, programs that have not been used recently are moved from RAM to paging space to help release physical memory for more real work. File memory, unlike computational memory, uses persistent segments and has a permanent storage location on disk. Data files or executable programs are mapped to persistent segments rather than working segments. The data files can relate to filesystems such as JFS, JFS2, or NFS. They remain in memory until the file is unmounted, a page is stolen, or the file is unlinked. After a data file is copied into RAM, the VMM controls when these pages are overwritten or used to store other data. Given the alternative, most people would much rather have file memory paged to disk than computational memory.

When a process references a page that is on disk, it must be paged in, which could cause other pages to be paged out. The VMM is constantly working in the background, trying to steal frames that have not been recently referenced using the page replacement algorithm discussed earlier. It also helps detect thrashing, which can occur when memory is extremely low and pages are constantly being paged in and out just to support processing. The VMM has a memory load control algorithm that can detect whether the system is thrashing and tries to remedy the situation. Unchecked thrashing can literally bring a system to a standstill, as the kernel becomes more concerned with making room for pages than with doing anything productive.

Active memory expansion

In addition to the core memory settings and environment, AIX 7 can take advantage of the power of the POWER7 CPU to provide active memory expansion (AME). AME compresses data within memory, allowing you to keep more data in memory and reduce the amount of page swapping to disk as data is loaded.
The configuration of AME is done per LPAR, so you can enable it for your database partition to keep more data read from disk in memory, but disable it for web servers, where the information stored in memory is swapped regularly. To prevent all information being compressed, memory is split into two pools, a compressed pool and an uncompressed pool. AIX 7 automatically adjusts the size of the two pools according to the workload and configuration of the logical partition.

The compression amount is defined using a compression ratio; for example, if your LPAR has been granted 2048MB and you specify a compression ratio of 2.0, it is given an effective memory capacity of 4096MB. Because different applications and environments are capable of different compression ratios (for example, heavy text applications may benefit from higher ratios), you can use the amepat command to monitor and determine the possible compression ratio with your given workload. You should run amepat with a given interval (in minutes) and a number of iterations, while you run your normal applications in the background, to collect the necessary information. This leads to a recommendation for the compression ratio to be used within the LPAR. You can see a sample of this in Listing 1.

Listing 1. Getting Active Memory Expansion statistics

    Command Invoked                : amepat 1 1
    Date/Time of invocation        : Fri Aug 13 11:43:45 CDT 2010
    Total Monitored time           : 1 mins 5 secs
    Total Samples Collected        : 1

    System Configuration:
    ---------------------
    Partition Name                 : l488pp065_pub
    Processor Implementation Mode  : POWER7
    Number Of Logical CPUs         : 4
    Processor Entitled Capacity    : 0.25
    Processor Max. Capacity        : 1.00
    True Memory                    : 2.00 GB
    SMT Threads                    : 4
    Shared Processor Mode          : Enabled-Uncapped
    Active Memory Sharing          : Disabled
    Active Memory Expansion        : Disabled

    System Resource Statistics:        Current
    ---------------------------    ----------------
    CPU Util (Phys. Processors)      0.04 [  4%]
    Virtual Memory Size (MB)         1628 [ 79%]
    True Memory In-Use (MB)          1895 [ 93%]
    Pinned Memory (MB)               1285 [ 63%]
    File Cache Size (MB)              243 [ 12%]
    Available Memory (MB)             337 [ 16%]

    Active Memory Expansion Modeled Statistics:
    -------------------------------------------
    Modeled Expanded Memory Size   : 2.00 GB
    Achievable Compression ratio   : 2.10

    Expansion    Modeled True     Modeled             CPU Usage
    Factor       Memory Size      Memory Gain         Estimate
    ---------    -------------    ------------------  -----------
    1.00         2.00 GB          0.00 KB [  0%]      0.00 [  0%]
    1.14         1.75 GB          256.00 MB [ 14%]    0.00 [  0%]

    Active Memory Expansion Recommendation:
    ---------------------------------------
    The recommended AME configuration for this workload is to
    configure the LPAR with a memory size of 1.75 GB and to configure
    a memory expansion factor of 1.14. This will result in a memory
    gain of 14%. With this configuration, the estimated CPU usage due
    to AME is approximately 0.00 physical processors, and the
    estimated overall peak CPU resource required for the LPAR is 0.04
    physical processors.

    NOTE: amepat's recommendations are based on the workload's
    utilization level during the monitored period. If there is a
    change in the workload's utilization level or a change in workload
    itself, amepat should be run again.

    The modeled Active Memory Expansion CPU usage reported by amepat
    is just an estimate. The actual CPU usage used for AME may be
    lower or higher depending on the workload.

You can monitor the current compression within a configured LPAR using the svmon tool, as shown in Listing 2.

Listing 2. Using svmon to get compression stats

    # svmon -G -O summary=longame,unit=MB
    Unit: MB
    --------------------------------------------------------------------
    Active Memory Expansion
    --------------------------------------------------------------------
       Size    Inuse     Free    DXMSz  UCMInuse  CMInuse    TMSz   TMFr
    1024.00   607.91   142.82   274.96    388.56   219.35  512.00   17.4

       CPSz     CPFr      txf      cxf       CR
     106.07     18.7     2.00     1.46     2.50

The DXMSz column is the important one here, as it shows the deficit in expanded memory.
Deficits occur when the specified compression ratio cannot be achieved, and the system starts to use memory that cannot be created out of compression. Therefore, you need to be careful about overspecifying the compression ratio. One other artifact of AME is that the memory sizes displayed by most tools, including vmstat and others, typically show the expanded memory size (that is, the configured memory multiplied by the compression ratio), not the actual memory size. Look for the true memory size in the output of the different tools to determine the actual memory available without compression.

Let's examine the tools that allow you to tune the VMM to optimize performance for your system. Here is an example of an environment where you want to tune parameters using a certain type of methodology, along with some of the key parameters you need to be aware of. In AIX 7, the vmo tool is responsible for all of the configuration of the tunable parameters of the VMM; it replaces the old vmtune tool available in AIX 5.

Altering the page size provides the most immediate performance improvement, due to the reduction of Translation Lookaside Buffer (TLB) misses, which occurs because the TLB can now map a much larger virtual memory range. For example, high performance computing (HPC) workloads, or an Oracle® database running either an Online Transaction Processing (OLTP) or a Data Warehouse application, can benefit from using large pages. This is because Oracle uses a lot of virtual memory, particularly with respect to its System Global Area (SGA), which is used to cache table data, among other things. The command in Listing 3 allocates 16777216 bytes to provide large pages, with 128 actual large pages.

Listing 3. Allocating bytes

    # vmo -r -o lgpg_size=16777216 lgpg_regions=128

If you want to use large pages in combination with shared memory, which is often used in HPC and database applications, you will also need to set the v_pinshm value:

    # vmo -p -o v_pinshm=1
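Stepping back to AME sizing: the expanded-memory arithmetic (the 2048MB LPAR with a 2.0 ratio yielding 4096MB) and the deficit that appears when the configured ratio exceeds what the workload can actually achieve can be sketched numerically. This is an illustrative model only, not output from any AIX tool:

```python
def expanded_memory(true_mb, expansion_factor):
    """Effective (expanded) memory an LPAR presents under AME:
    the configured true memory multiplied by the expansion factor."""
    return true_mb * expansion_factor

def memory_deficit(true_mb, expansion_factor, achievable_ratio):
    """Deficit in expanded memory (as reported in svmon's DXMSz
    column) that appears when the configured expansion factor
    promises more than the workload's achievable compression ratio
    can deliver. Zero when the promise is achievable."""
    promised = true_mb * expansion_factor
    achievable = true_mb * achievable_ratio
    return max(0.0, promised - achievable)
```

The second function shows why overspecifying the ratio is risky: a 2048MB LPAR configured at 2.5 when only 2.0 is achievable runs a 1024MB deficit.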
The most important vmo settings are minperm and maxperm. Setting these parameters determines the appropriate values for your system, ensuring that it is tuned to favor either computational memory or file memory. In most cases, you do not want to page working segments, as doing so causes your system to page unnecessarily and decreases performance.

The way it worked in the past was actually quite simple: if your file pages (numperm%) were greater than maxperm%, page replacement would steal only file pages. When numperm% fell below minperm%, page replacement could steal both file and computational pages. If it was in between, page replacement would steal only file pages, unless the number of file repages was greater than that of computational pages. Another way of looking at this: if your numperm is greater than maxperm, the VMM starts to steal from persistent storage. Based on this methodology, the old approach to tuning these parameters was to bring maxperm to an amount below 20 and minperm to 10 or less. This is how you would have normally tuned your database server.

That has all changed. The new approach sets maxperm to a high value (for example, above 80) and makes sure the lru_file_repage parameter is set to 0. lru_file_repage was first introduced with ML4 of AIX Version 5.2 and ML1 of AIX Version 5.3. This parameter indicates whether the VMM's repage counts should be considered and what type of memory it should steal. The default setting is 1, so you need to change it. Setting the parameter to 0 tells the VMM that you prefer it to steal only file pages rather than computational pages. This can change if your numperm is less than minperm or greater than maxperm, which is why you now want maxperm to be high and minperm to be low. Let's not lose sight of the fact that the primary reason you need this value tuned is because you want to protect the computational memory.
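The classic steal policy just described can be sketched as a decision function. This is a simplification for illustration, not the kernel's actual page replacement code:

```python
def pages_to_steal(numperm, minperm, maxperm, lru_file_repage=1,
                   file_repages=0, comp_repages=0):
    """Which page class the replacement algorithm targets, following
    the classic rules described in the text (all percentages)."""
    if numperm > maxperm:
        return "file"          # steal only file pages
    if numperm < minperm:
        return "both"          # steal file and computational pages
    # In between: prefer file pages, unless repage counts are being
    # consulted (lru_file_repage=1) and file pages are re-read more
    # often than computational pages.
    if lru_file_repage and file_repages > comp_repages:
        return "both"
    return "file"
```

With the modern settings (minperm low, maxperm high, lru_file_repage=0), the middle branch always resolves to stealing file pages, which is exactly the behavior that protects computational memory.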
Getting back to the example: Oracle uses its own cache, and using AIX 7 file caching for this purpose only causes confusion, so you want to stop it. If you were to reduce maxperm in this scenario, you would make the mistake of stopping the caching done by the application programs that are running. Listing 4 sets these critical tuning parameters.

Listing 4. Setting tuning parameters

    vmo -p -o minperm%=5
    vmo -p -o maxperm%=90
    vmo -p -o maxclient%=90

Although you used to have to change these parameters, you now leave strict_maxperm and strict_maxclient at their default values. If strict_maxperm were changed to 1, it would place a hard limit on the amount of memory that could be used for persistent file cache, by making the maxperm value the upper limit for the cache. These days this is unnecessary, because changing the lru_file_repage parameter is a far more effective way of tuning, as you would prefer not to use AIX 7 file caching at all.

Two other important parameters worth noting here are minfree and maxfree. If the number of pages on your free list falls below the minfree parameter, the VMM starts to steal pages (just to add to the free list), which is not good. It continues to do this until the free list has at least the number of pages specified by the maxfree parameter. In older versions of AIX, with the minfree default set at 120, you would commonly see your free list at 120 or lower, which led to more paging than was necessary; worse, threads needing free frames were actually getting blocked because the value was so low. To address this issue, the default values of minfree and maxfree were bumped up in AIX Version 5.3 to 960 and 1088, respectively. If you are running AIX Version 5.2 or lower, we recommend these settings, which you can change manually using the commands in Listing 5.

Listing 5. Setting the minfree and maxfree parameters manually

    vmo -p -o minfree=960
    vmo -p -o maxfree=1088

Configuring variable page size support

VPSS works by using the default 4KB page size.
Once the application has been allocated 16 4KB blocks, assuming all of the blocks are in current use, they are promoted to a single 64KB block. This process is repeated for as many 16-count sequences of 4KB blocks as the application is using. Two configurable parameters control how VPSS operates.

The first simply enables multiple page size support. The vmm_mpsize_support tunable, configured with vmo, sets how the VMM operates. A value of 0 indicates that only the 4KB and 16MB page sizes are supported. A value of 1 allows the VMM to use all the page sizes supported by the processor. A value of 2 allows the VMM to use multiple page sizes per segment and is the default for all new installations.

With multiple page size support enabled, the vmm_default_pspa parameter controls how many sequential pages are required for the smaller 4KB pages to be promoted to the 64KB page size. Some applications, particularly those that use a lot of memory, may perform better with a 64KB page size even though they don't use full 64KB pages. In this case, you can use the vmm_default_pspa parameter to specify that fewer than 16 4KB pages are required for promotion, expressed as a percentage reduction. The default value of 0 indicates that all 16 pages are required. A value of 50 indicates that only 8 pages are required. A value of 100 has the effect of promoting all 4KB pages to 64KB pages.

As discussed, before you tune or even start monitoring AIX 7, you must establish a baseline. After you tune, you must capture data and analyze the results of your changes. Without this type of information, you never really understand the true impact of tuning. In this first part of the series, we covered, where appropriate, the effect of using AME to squeeze more memory out of your systems. You also tuned an Oracle system to optimize utilization of the memory subsystem, and you examined some important kernel parameters, what they do, and how to tune them, including how to make the best use of variable page size support.
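As a footnote to the VPSS discussion, the promotion rule can be modeled in a few lines. This is an illustrative model of the behavior described, not the kernel's implementation; the vmm_default_pspa semantics follow the percentages given in the text:

```python
def pages_needed_for_promotion(pspa_percent=0, group_size=16):
    """How many in-use 4KB pages a 64KB-aligned group needs before
    promotion, with vmm_default_pspa as a percentage reduction:
    pspa=0 -> all 16 pages, pspa=50 -> 8 pages, pspa=100 -> any page."""
    needed = group_size - (group_size * pspa_percent) // 100
    return max(1, needed)

def promoted_groups(in_use_pages, pspa_percent=0, group_size=16):
    """Given the indices of in-use 4KB pages, return the 64KB group
    indices that qualify for promotion to a single 64KB page."""
    threshold = pages_needed_for_promotion(pspa_percent, group_size)
    counts = {}
    for p in in_use_pages:
        g = p // group_size            # which 64KB-aligned group
        counts[g] = counts.get(g, 0) + 1
    return sorted(g for g, c in counts.items() if c >= threshold)
```

The model shows why a memory-hungry application benefits from a higher pspa value: partially used groups get promoted sooner, so more of its memory is accessed through the larger 64KB mappings.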
Part 2 focuses much more on the detail of systems monitoring for the purposes of determining memory bottlenecks, along with analyzing trends and results. Part 3 focuses primarily on swap space and other methods to tune your VMM to maximize performance.

- AIX memory affinity support: Learn about AIX memory from the IBM System p™ and AIX InfoCenter.
- IBM Redbooks: See how Database Performance Tuning on AIX is designed to help system designers, system administrators, and database administrators design, size, implement, maintain, monitor, and tune a Relational Database Management System (RDBMS) for optimal performance on AIX.
- "Power to the people" (developerWorks, May 2004): Read this article for a history of chip making at IBM.
- "Processor affinity on AIX" (developerWorks, November 2006): Using process affinity settings to bind or unbind threads can help you find the root cause of troublesome hang or deadlock problems. Read this article to learn how to use processor affinity to restrict a process and run it only on a specified CPU.
- "CPU monitoring and tuning" (March, 2002): Read this article to learn how standard AIX tools can help you determine CPU bottlenecks.
- Operating system and device management: This document from IBM provides users and system administrators with complete information that can affect your selection of options when performing such tasks as backing up and restoring the system, managing physical and logical storage, and sizing appropriate paging space.
- "nmon performance: A free tool to analyze AIX and Linux® performance" (developerWorks, February 2006): This free tool gives you a huge amount of information all on one screen.
- "nmon analyser—A free tool to produce AIX performance reports" (developerWorks, April 2006): Read this article to learn how to produce a wealth of report-ready graphs from nmon output.
- The AIX 7.1 Information Center is your source for technical information about the AIX operating system.
- The IBM AIX Version 7.1 Differences Guide can be a useful resource for understanding changes in AIX 7.1.
- The IBM AIX Version 6.1 Differences Guide can be a useful resource for understanding changes in AIX 6.1.
- Popular content: See what AIX and UNIX content your peers find interesting.
- AIX and UNIX: The AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration and expanding your UNIX skills.
- New to AIX and UNIX?: Visit the New to AIX and UNIX page to learn more about AIX and UNIX.
- Search the AIX and UNIX library by topic.
- Safari bookstore: Visit this e-reference library to find specific technical resources.
- developerWorks technical events and webcasts: Stay current with developerWorks technical events and webcasts.
- Podcasts: Tune in and catch up with IBM technical experts.
- Future Tech: Visit Future Tech's site to learn more about their latest offerings.

Get products and technologies

- IBM trial software: Build your next development project with software for download directly from developerWorks.
- Participate in the developerWorks blogs and get involved in the developerWorks community.
- AIX 7 Open Beta: This forum is for technical discussions supporting the AIX 7 Open Beta Program.
- Follow developerWorks on Twitter.
- Get involved in the My developerWorks community.
- Participate in the AIX and UNIX® forums.
Virginia Gov. Tim Kaine has signed a number of bills aimed at increasing the amount of energy generated from renewable sources. In addition, he has amended three bills, which the legislature is set to reconsider on April 8th.

The governor signed legislation increasing incentives for the production of biofuels, turning municipal waste and compost into energy, encouraging "green roofs" projects, and increasing renewable energy sourcing targets. HB 1994 is designed to encourage investor-owned incumbent electric utilities to produce 15 percent of their electricity from renewable sources by 2025.

HB 2001 increases incentives for the production of biofuels from algae, cellulosic sources and winter crop cover. The Governor's Office noted that increased support for cellulosic sources would encourage the development of biofuels that do not affect food supply or feed/food prices. Using winter crop cover as a biofuel source also helps reduce agricultural runoff, which, in turn, helps reduce pollution of the Chesapeake Bay. Cellulosic crops such as switchgrass and other warm season grasses do not require tilling every year and can serve as stream buffers. Algae can be grown to clean up polluted waters and provide a source for biodiesel production. The bill also lowers the minimum annual production requirement for producers to be eligible for grants to one million tons per year. The Governor's Office said lowering the production requirement would make more producers eligible for grants and lead to more green jobs.

HB 2171 provides that any farm that owns and operates facilities to generate electricity from waste-to-energy technology is excluded from regulation as a public utility.
According to the Governor's Office, a person who obtains the majority of his income from farming activities and produces the waste that is used in the generation of the electricity may connect to the electric grid in accordance with state regulations and will not be considered a manufacturer under state law.

The governor also signed bills that encourage electricity production from municipal waste and encourage local governments to provide financing for energy efficiency and clean energy improvements. HB 2576 clarifies that solid waste management facilities that produce electricity from solid waste meet the definition of a qualified project under the Public-Private Education Facilities and Infrastructure Act of 2002. This will encourage more such projects through the financial incentives made possible by public-private partnerships. The genesis of this bill is Arlington's landfill and waste-to-energy plant, where municipal solid waste is converted into enough electricity to supply 23,000 homes. In addition to the energy generated, these facilities can significantly reduce the volume of waste put into landfills.

SB 1212 grants localities the authority to create energy financing programs. Originally designed to apply only to Charlottesville, this bill now applies to all localities. With this bill, localities could enter into contracts with landowners to install energy efficiency and clean energy improvements. The improvements would be paid for with an assessed fee based on the cost and attached to billings such as water and sewer charges. The governor hopes this bill encourages green energy contractors in the state to step up their hiring.

The governor also signed a set of bills that authorize localities to adopt ordinances that set forth incentives to promote the construction of "green roofs" on private homes and businesses.
A green roof is any roof that provides for the generation of renewable energy, or any roof designed in accordance with the standards and specifications of the state's stormwater management program with regard to stormwater control mechanisms.

Under SB 1350, the Marine Resources Commission now has the authority to lease sub-aqueous lands for the purpose of generating electrical energy from renewable sources, transmit energy from such sources to shore, and ensure that any such leases require a royalty. Any money collected through this initiative will be appropriated to the Virginia Coastal Energy Research Consortium. The bill also directs VMRC to determine whether sufficient and appropriate sub-aqueous lands exist to support a commercial offshore wind farm and, if such land exists, offer it for development in a lease auction.

Amended bills returned to the legislature for reconsideration include SB 1248, HB 2506, SB 1339 and HB 2155. SB 1248, which would have required a statewide reduction of electricity consumption by 19 percent of 2006 levels by 2025, now makes that goal voluntary, with incentives for utilities that meet it. The Governor's Office noted that this bill has the support of both the electric utilities and environmental groups. SB 1339, which increases the commonwealth's renewable portfolio standard goal from 12% to 15% by 2025 to ensure that utilities aggressively pursue renewable energy, was also amended. The amended legislation will be considered on April 8th.
This article examines the different behaviors when running SQL statements with the system and SQL naming conventions. The first part of this article focused on how the system and SQL naming conventions resulted in different object ownerships and access authorities when creating IBM® DB2® objects with SQL.

The naming convention also determines which character, either a slash (/) for system naming (*SYS) or a period (.) for SQL naming (*SQL), is used to separate the schema and the object name when DB2 object references are explicitly qualified with a schema. IBM i applications, however, rarely access DB2 objects by explicitly specifying the schema name. Instead, these applications rely on the library list being searched to find the appropriate objects. The first object found within the library list with the specified name and the appropriate object type will be used. When testing applications, additional libraries containing new programs and data are simply inserted at the top of the library list. In this way, it is easy to work with a mix of old and new programs as well as production data and test data. Applications with explicitly qualified schema references have to be manually changed to run in different environments.

With regard to typical IBM i applications, let us analyze the different behaviors, based on the naming convention, for accessing unqualified database objects specified in SQL statements. Persistent user data is only stored in tables. Data in a table can be accessed directly or indirectly through an alias or a view. Tables, views, or aliases can be accessed in SQL statements by explicitly qualifying the object with the schema name or by having the schema name implicitly resolved based on the naming convention.

Data access methods

Data in IBM DB2 for i objects can be accessed and maintained with record-level access interfaces, which can be used in some high-level languages such as RPG or COBOL.
However, SQL is the most common interface used to access data in new IBM i applications and programs. SQL statements can be run either as static or dynamic SQL. The main difference between static and dynamic SQL is based on how the SQL statement itself is generated.

- Static SQL statements

Static SQL is heavily used in SQL routines or application programs with embedded SQL. A static SQL statement is hard-coded within the source of the program or routine. For static SQL statements, the SQL precompiler checks the SQL syntax, evaluates the references to tables and columns, and declares data types of all the host variables. The SQL precompiler also determines the schema to be used at run time for resolving unqualified database objects based on the naming convention used at compile time. From a performance perspective, using static SQL is the best option because several steps (for example, syntax checking) are already done at compile time.

Listing 1: Static SQL statement shows a static SQL statement embedded in an RPG program to determine the number of orders for a specific year. The year value is passed as the parameter value (ParYear) to the procedure.

Listing 1: Static SQL statement

D GetNbrOfOrders...
D                 PI            10I 0
D  ParYear                       4P 0 Const
 ...
D NbrOfOrders     S             10I 0
 /Free
   ...
   Exec SQL Select Count(*) Into :NbrOfOrders
              From Order_Header
             Where Year(OrderDate) = :ParYear;

- Dynamic SQL statements

Dynamic SQL statements are built during the run time execution of a program. After being constructed, the dynamic SQL statements are checked for syntax and then converted into an executable SQL statement that can be run.

Listing 2: Dynamic SQL statement in a SQL routine shows the SQL script to create the UDF COUNT_NUMBER_OF_ROWS. With this function, the number of rows in any table, view, or alias can be determined. The table (or view or alias) name, as well as the schema name, is passed as the parameter value.
When calling the UDF, the SQL statement to be executed is built as a string including the passed parameter values. This string is checked for syntax and then converted into an executable SQL statement by running the PREPARE statement, and finally executed by using the EXECUTE statement. As neither the table nor the schema to be accessed at run time is known at compile time, dynamic SQL is needed.

Listing 2: Dynamic SQL statement in a SQL routine

Create Function Count_Number_Of_Rows
       (ParTable  VarChar(128),
        ParSchema VarChar(128))
       Returns Integer
       Language SQL
       Not Fenced
Begin
   Declare RtnNbrRows Integer;
   Declare String VarChar(256);

   Set String = 'Values(Select Count(*) From ';
   If ParSchema > ' ' Then
      Set String = String CONCAT ParSchema CONCAT '/';
   End If;
   Set String = String CONCAT ParTable CONCAT ') into ?';

   PREPARE DynSQL From String;
   EXECUTE DynSQL Using RtnNbrRows;
   Return RtnNbrRows;
End;

Dynamic SQL statements can be used in SQL routines or embedded in programs, but dynamic SQL is also most commonly used for running SQL statements from interfaces, such as ODBC or DB2 Web Query, and from SQL command line processors, such as the IBM System i® Navigator Run SQL Scripts interface or the RUNSQLSTM and RUNSQL command line commands. Listing 3: Dynamic SQL statement issued interactively shows a dynamic SQL statement run with the System i Navigator Run SQL Scripts interface.

Listing 3: Dynamic SQL statement issued interactively

Select * from Order_Header;

Determining the default schema

When an SQL statement contains an unqualified table, view, or alias reference, DB2 must determine the default schema and search for that schema. Schema is the SQL term analogous to an IBM i library. The initial value for the default schema depends on the naming convention used in the SQL environment and whether the SQL statement being run is static or dynamic.
Default schema for static SQL statements

When using embedded SQL, the default schema for static SQL statements can be explicitly set in the compile command (CRTSQLxxxI) using the DFTRDBCOL (default collection) parameter. Alternatively, a SET OPTION statement with the DFTRDBCOL parameter can also be included in the source code. In embedded SQL programs, only a single SET OPTION statement can be specified, even if the source code consists of several independent (exported) procedures. The SET OPTION statement must be placed in the source code as the first SQL statement.

Listing 4: SET OPTION statement embedded in RPG shows the excerpt of an RPG source with a SET OPTION statement that sets the naming convention to SQL naming and sets the default schema for static SQL statements to SALESDB01. The SET OPTION statement is included immediately after the global D specifications, that is, as the first statement in the C specifications.

Listing 4: SET OPTION statement embedded in RPG

D* Global D Specifications
 /Free
   EXEC SQL Set Option Naming    = *SQL,
                       DFTRDBCOL = SALESDB01;
   // RPG code and other embedded SQL statements go here

With SQL routines, the default schema can also be explicitly set by including a SET OPTION statement with the DFTRDBCOL parameter. In Listing 5: SET OPTION statement in an SQL routine, the default schema has been explicitly set to SALESDB01 for static SQL statements within the MyProcedure routine.

Listing 5: SET OPTION statement in an SQL routine

Create Procedure MyProcedure ()
       Language SQL
       Set Option DFTRDBCOL = SALESDB01
Begin
   -- SQL routine body – source code
End;

If the default schema is not explicitly set, it is determined at compile time depending on the naming convention.

- For system naming, the default schema is the job library list (*LIBL).

With system naming, the term schema can be misleading because the initial value is set to the special value *LIBL.
This special value means that the library list is used and multiple schemas can be searched when trying to resolve unqualified object references. The first DB2 object found in the library list that matches the specified unqualified database object name and object type will be used. Database objects located in different schemas can be accessed within the same SQL statement without any schema specification.

- For SQL naming, the default schema currently used in the SQL environment where the SQL routine is created will be adopted.

Because application programs with embedded SQL are not created through an SQL interface, the default schema for their static SQL statements is set to the runtime authorization ID. On IBM i, the runtime authorization ID is the user profile of the job that is performing the compile. This means that the default behavior for SQL naming is for DB2 to try to find the unqualified object in the schema that has the same name as the creator's user profile. SQL naming allows only a single schema to be searched when resolving unqualified DB2 object references.

Default schema for dynamic SQL statements

For dynamic SQL statements, the default schema depends on whether a default schema value has been explicitly specified. If a default schema is not explicitly set, its initial value depends on the naming convention.

- For system naming, the default schema is the job library list (*LIBL).
- For SQL naming, the default schema is the runtime authorization ID (current user profile). As stated earlier, the default behavior for SQL naming is for DB2 to try to find the unqualified object in a schema that has the same name as the current user profile.

SET SCHEMA statement

The value of the default schema can be changed on all the interfaces by running the SET SCHEMA statement. The new default schema value supplied by the SET SCHEMA statement is used only to resolve unqualified database objects in dynamic SQL statements.
It has no effect when resolving unqualified object references for static SQL statements at run time. In the SET SCHEMA statement, special registers such as USER, SESSION_USER, or SYSTEM_USER can also be specified. The special value *LIBL is not allowed, not even in an environment where the system naming convention is used. If the SET SCHEMA statement is run in an environment where the system naming convention is used, the library list will no longer be searched for dynamic SQL statements. Instead, the unqualified database objects are searched for in the single schema that was specified on the SET SCHEMA statement.

The default schema to be searched at run time for unqualified data access in dynamic SQL statements is also referred to as the current schema. The CURRENT_SCHEMA special register returns the schema value currently being used for resolving unqualified data access in dynamic SQL statements.

It should be noted that current schema and current library are not identical terms. The current library is added to the library list before the user portion of the list by running the CHGCURLIB (change current library) command. The library list is accessed only when system naming is used. Therefore, the current library can only be searched in combination with system naming. The current schema (or default schema) is either the current library list (system naming) or a single schema (SQL naming), which may or may not be part of the current library list.

SET SCHEMA statement and dynamic SQL interfaces

Many of the DB2 for i SQL interfaces can execute the SET SCHEMA statement automatically on your behalf through the ability to specify a default schema value for that dynamic SQL interface. The mechanism for specifying a default schema value depends on the interface.

- IBM System i Navigator Run SQL Scripts tool

The default schema can be preset by clicking Connection > JDBC Settings.
The default schema or the library list can be specified on the System tab, as shown in the following figure.

Figure 1: System i Navigator Run SQL Scripts – setting the default schema

- Command line commands: RUNSQLSTM and RUNSQL

The default schema can be specified on these SQL commands with the DFTRDBCOL (default collection) parameter.

- ODBC connections

ODBC connections can be defined by using ODBC Administration. When accessing your tables and views with ODBC, the default schema can be set from the IBM i Access for Windows ODBC Administration interface or programmatically by setting the DefaultLibraries connection keyword.

- SQL call level interface (CLI)

When using SQL CLI functions, the schema to be used can be explicitly specified by setting the SQL_ATTR_DEFAULT_LIB or SQL_ATTR_DBC_DEFAULT_LIB environment or connection variables.

- Java™ Database Connectivity (JDBC) or Structured Query Language for Java (SQLJ)

The default schema can be set through the libraries property object.

- OLE DB using the IBM i Access Family OLE DB Provider

The default schema can be explicitly specified through DefaultCollection in the Connection Object Properties.

- ADO .NET using the IBM i Access Family ADO .NET Provider

The default schema can be explicitly specified through DefaultCollection in the Connection Object Properties.

Some of these interfaces allow both a default schema and a default library list to be set. If a default schema is specified and the system naming convention is used, it is possible that DB2 uses only the specified default schema while ignoring the library list when resolving unqualified DB2 object references. Based on this behavior, it is better to avoid specifying a value for the default schema when using the system naming convention.

To examine the different behaviors in accessing database objects (using either system or SQL naming), I created a test environment to represent a typical IBM i application.
The test environment consists of four schemas:

- Schema MASTERDB – master information

Information such as the address of a particular customer, supplier, or an item is needed for multiple applications, such as Accounting, Purchase, Sales, ERP, and so on. Schema MASTERDB contains the following tables: ADDRESS_MASTER, ITEM_MASTER, and ORDER_SUMMARY.

- Schema SALESDB01 and schema SALESDB02 – sales data

The SALESDB01 and SALESDB02 schemas contain the necessary sales information for company 1 and company 2, respectively. Both schemas include an ORDER_HEADER and an ORDER_DETAIL table.

- Schema SALESPGM – program schema

Schema SALESPGM does not contain any data, but it is used as a container for all of the (service) programs, stored procedures, and user-defined functions.

Executing dynamic SQL statements

In the following examples, let us examine the different behaviors when running dynamic SQL statements in combination with either system or SQL naming, using the System i Navigator Run SQL Scripts tool as our dynamic SQL interface.

Unqualified data access with system naming

When executing dynamic SQL statements in an environment where system naming is used and the default schema is not explicitly set, the current library list is searched to find all unqualified tables and views. The library list is initially set in any SQL interface based on the job description. However, the library list can be modified by running commands such as CHGLIBL (change library list) or ADDLIBLE (add library list entry). When the default schema is changed by running a SET SCHEMA statement, the current library list will be ignored and the newly set (single) schema will be searched instead.

The following example demonstrates this behavior by executing several SQL statements. Before running the first SELECT statement, the library list is explicitly set by executing the CHGLIBL (change library list) command.
In the SELECT statement, the ORDER_HEADER table, which exists in both the SALESDB01 and SALESDB02 schemas, is joined with the ADDRESS_MASTER table, which is located in the MASTERDB schema. The SALESDB01 and MASTERDB schemas are both part of the current library list. Consequently, both tables are found and the SELECT statement is executed successfully. As the ORDER_HEADER table is found in the SALESDB01 schema, the requested data for company 1 is returned.

Listing 6: Accessing data in multiple schemas with system naming

CL: CHGLIBL LIBL(SALESDB01 MASTERDB QGPL);

Select h.Company, h.OrderNo, AddressNo, a.Name1, a.Address, a.City
  From Order_Header h Join Address_Master a Using(AddressNo);

COMPANY | ORDERNO | ADDRESSNO | NAME1          | ADDRESS                | CITY
      1 |     100 |         1 | Fischer & Co   | Wald- und Wiesenweg 16 | Dietzenbach
      1 |     110 |         3 | Bauer GmbH     | Nordring 417           | Berlin
      1 |     120 |         4 | Rathaus Center | Hauptstr. 3            | Hamburg
      1 |     130 |         4 | Rathaus Center | Hauptstr. 3            | Hamburg
      1 |     140 |         4 | Rathaus Center | Hauptstr. 3            | Hamburg

To retrieve the same information for company 2, the SALESDB02 schema must be added to the library list. The SALESDB01 schema must either be removed or be located after the SALESDB02 schema in the library list. The library list must be changed by running a command such as CHGLIBL or ADDLIBLE.

If the SALESDB02 schema is instead set by running the SET SCHEMA statement, the default schema value is changed from *LIBL to SALESDB02. When re-running the SELECT statement after this change, the execution fails with SQLSTATE 42704, because the ADDRESS_MASTER table, which is located in the MASTERDB schema, is not found in the SALESDB02 schema.

Unqualified data access with SQL naming

In an environment where SQL naming is used, only a single schema is searched at run time to resolve unqualified tables, views, and aliases.
When executing the SELECT statement presented in Listing 6 with the SQL naming convention, the SELECT statement will always fail with an SQLSTATE value of 42704, because the ADDRESS_MASTER and ORDER_HEADER tables are located in different schemas. When using the SQL naming convention, database objects in different schemas either have to be qualified or must be accessed through aliases or views that are located in the default schema. The aliases or views can reference tables or views in different schemas.

Executing dynamic SQL in SQL routines or programs

Dynamic SQL statements embedded in SQL routines or programs follow the same rules for resolving unqualified object references. However, dynamic SQL statements use the naming convention that was active when the SQL routine or program was created, not the naming convention that is specified when the SQL routine or program is run. For example, if an SQL stored procedure is created with the SQL naming convention, the dynamic SQL statements within that procedure will use the SQL naming rules for resolving unqualified names, even when that SQL stored procedure is called from an SQL interface that is using the system naming convention.

The naming convention to be used for a program with embedded SQL can be defined at compile time, through the OPTION parameter in the compile command or by embedding a SET OPTION statement within the source code. An SQL routine inherits the naming convention of the SQL interface that is being used to create the SQL routine. Even though a SET OPTION statement can be embedded in an SQL routine, specifying the NAMING option is not allowed.

The default schema to be used for the dynamic SQL statements at run time can be explicitly set from within the routine or program by running either a command (for modifying the library list) or a SET SCHEMA statement embedded in the source code.
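The resolution rules demonstrated so far can be condensed into a small Python model for comparison. This is a toy sketch, not a DB2 API; the schema, table, and user names simply mirror the test environment described above, and the user profile JDOE is a hypothetical example.

```python
def resolve_unqualified(name, naming, library_list, current_schema, user, catalog):
    """Toy model of DB2 for i unqualified-name resolution for dynamic SQL.
    catalog maps a schema name to the set of object names it contains."""
    if current_schema is not None:
        # After SET SCHEMA, only the single specified schema is searched,
        # even under system naming.
        search_path = [current_schema]
    elif naming == "*SYS":
        # System naming: the library list (*LIBL) is searched in order.
        search_path = library_list
    else:
        # SQL naming default: a single schema named after the user profile.
        search_path = [user]
    for schema in search_path:
        if name in catalog.get(schema, set()):
            return schema
    return None  # would surface as SQLSTATE 42704 in DB2

catalog = {"SALESDB01": {"ORDER_HEADER"}, "MASTERDB": {"ADDRESS_MASTER"}}
libl = ["SALESDB01", "MASTERDB", "QGPL"]

# System naming with *LIBL finds tables spread across schemas:
print(resolve_unqualified("ADDRESS_MASTER", "*SYS", libl, None, "JDOE", catalog))
# After SET SCHEMA SALESDB02, the library list is no longer searched:
print(resolve_unqualified("ADDRESS_MASTER", "*SYS", libl, "SALESDB02", "JDOE", catalog))
```

The second call mirrors the SQLSTATE 42704 failure shown earlier: once SET SCHEMA is in effect, system naming no longer searches the library list.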
When modifying the library list within an SQL routine or application program, the modified library list is used by all the programs and procedures running within the same job. When running a SET SCHEMA statement within an SQL routine or an embedded SQL program, the SET SCHEMA setting will only be used by the dynamic SQL statements within this routine or program. The default schema value of the interface that called the SQL routine or program remains untouched.

The following SQL script creates the stored procedure ORDERADDRD in an environment where the system naming convention is used. The stored procedure accepts a single parameter (ParAddressNo) and returns all order header and address information for this specific address as a result set. The ORDER_HEADER table is joined with the ADDRESS_MASTER table. Neither of these tables is qualified with a schema. The SQL SELECT statement is dynamically prepared and executed.

Listing 7: Routine ORDERADDRD with dynamic SQL

Create Procedure SALESPGM/OrderAddrD
       (In ParAddressNo Integer)
       Dynamic Result Sets 1
       Language SQL
Begin
   Declare StringSQL01 VarChar(1024);
   Declare CsrC01 Cursor For DynSQLC01;

   Set StringSQL01 = 'Select Company, OrderNo, OrderDate,
                             AddressNo, Name1, City
                        From Order_Header Join Address_Master
                             Using (AddressNo)
                       Where AddressNo = ?';
   Prepare DynSQLC01 From StringSQL01;
   Open CsrC01 Using ParAddressNo;
End;

Listing 8 shows two successful calls of the ORDERADDRD procedure as well as the content of the result sets returned by each procedure call. First, the library list is explicitly set with the CHGLIBL command. Because system naming was used to create the procedure, the library list is searched to find the unqualified references. The ORDER_HEADER table is found in the SALESDB01 schema, while the ADDRESS_MASTER table is found in the MASTERDB schema. The result set contains the data for company 1, confirming that the ORDER_HEADER table in the SALESDB01 schema was used.
To get the order header data for company 2, the library list is changed with the CHGLIBL command before calling the stored procedure.

Listing 8: Dynamic SQL in SQL routines with system naming

CL: CHGLIBL LIBL(SALESDB01 MASTERDB SALESPGM QGPL);
Call SalesPGM/OrderAddrD(4);

CL: CHGLIBL LIBL(SALESDB02 MASTERDB SALESPGM QGPL);
Call SalesPGM/OrderAddrD(4);

If the ORDERADDRD routine had been created with the SQL naming convention, the stored procedure calls in Listing 8 would fail. This is because, with the SQL naming convention, the library list is not searched, and the ORDER_HEADER and ADDRESS_MASTER tables are located in different schemas.

Running static SQL statements

Because static SQL statements are hard-coded, they are analyzed by DB2 at compile time. Information about the SQL statement, such as the default schema, is determined and stored in the newly created program or routine object. The naming convention used at compile time determines how the default schema will be computed for unqualified references in the static SQL statements.

The default schema for static SQL statements can also be manually controlled by explicitly specifying the DFTRDBCOL (default collection/schema) parameter on the precompile command or by specifying the DFTRDBCOL parameter on the SET OPTION statement or clause. If an SQL routine is created from an SQL interface that had previously specified a default schema value using the SET SCHEMA statement or an interface setting, DB2 for i will automatically add a SET OPTION clause with the DFTRDBCOL parameter to the SQL routine definition. In this situation, the resolution of unqualified references in static SQL statements is indirectly affected by the SET SCHEMA statement.

Even though the default schema is determined at compile time, the unqualified DB2 object references are not checked for existence. The SQL routine or program can be generated successfully even when tables or views that are referenced do not exist on the system.
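The compile-time determination just described can also be sketched as a small illustrative model (plain Python, not a DB2 interface). The precedence it encodes, an explicit DFTRDBCOL value first and the naming convention otherwise, follows the rules above; for SQL naming the model simplifies the creator's user profile and the schema in effect at creation time into a single creator_schema argument, and JDOE is a hypothetical user profile.

```python
def static_default_schema(compile_naming, dftrdbcol=None, creator_schema=None):
    """Toy model of how the default schema for static SQL statements is
    fixed when a routine or program is created (not a DB2 API)."""
    if dftrdbcol is not None:
        # DFTRDBCOL on the precompile command or SET OPTION wins outright.
        return dftrdbcol
    if compile_naming == "*SYS":
        # System naming: the library list current at run time is searched.
        return "*LIBL"
    # SQL naming: the schema in effect at creation time (a simplification
    # of the user-profile and current-schema rules described above).
    return creator_schema

print(static_default_schema("*SYS"))                           # *LIBL
print(static_default_schema("*SQL", creator_schema="JDOE"))    # JDOE
print(static_default_schema("*SQL", dftrdbcol="SALESDB01"))    # SALESDB01
```

Note that "*LIBL" here is a placeholder meaning "search the runtime library list", which is exactly why a routine compiled under system naming keeps following CHGLIBL changes at run time.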
Running static SQL statements with System naming

To find unqualified database references for static SQL statements embedded in SQL routines or programs created with the System naming convention, DB2 searches the runtime definition of the library list, even when the routine is called from an interface specifying SQL naming.

Listing 9 shows the SQL script for creating the ORDERADDR procedure in the SALESPGM schema using System naming. The ORDERADDR procedure runs the same SQL statement as the ORDERADDRD routine (shown in Listing 7), but this time static SQL is used instead of dynamic SQL.

Listing 9: Routine ORDERADDR with static SQL statements created using System naming

Create Procedure SalesPGM/OrderAddr
       (In ParAddressNo Integer)
   Dynamic Result Sets 1
   Language SQL
Begin
   Declare CsrC01 Cursor For
      Select Company, OrderNo, OrderDate, AddressNo, Name1, City
        From Order_Header Join Address_Master Using(AddressNo)
       Where AddressNo = ParAddressNo;
   Open CsrC01;
End;

The ORDERADDR stored procedure was created in an environment where System naming was used, so the library list at run time is searched to resolve the unqualified object references in the SELECT statement. The naming convention and other attributes of an SQL routine or embedded SQL program can be determined by using the PRTSQLINF (Print SQL Information) command or by accessing the SYSPROGRAMSTAT catalog view in QSYS2.

Listing 10 shows two calls of the ORDERADDR procedure and the returned result sets from an environment where SQL naming is used. Remember that calling the stored procedure in an environment where System naming is used returns the same results, because DB2 uses the naming convention specified at compile time for the SQL routine or program. First, the library list is explicitly set with the CHGLIBL command. Additionally, the default schema is set to SALESDB02.
When calling the stored procedure, the library list is searched while the specified SALESDB02 schema is ignored, because the procedure was created with the System naming convention. The ORDER_HEADER table is found in the SALESDB01 schema and the ADDRESS_MASTER table is found in the MASTERDB schema. The static SELECT statement runs successfully and the order header data for company 1 is returned. To get the order header data for company 2, the library list is changed, replacing the SALESDB01 schema with the SALESDB02 schema. As you can see from the result set returned by the second call to the procedure, the order header data for company 2 is returned.

Listing 10: Static SQL in an SQL routine created with System naming

CL: CHGLIBL LIBL(SALESDB01 MASTERDB SALESPGM QGPL);
Set Schema SALESDB02;
Call SalesPGM.OrderAddr(4);

CL: CHGLIBL LIBL(SALESDB02 MASTERDB SALESPGM QGPL);
Call SalesPGM.OrderAddr(4);

Running static SQL statements using SQL naming

When creating an SQL routine with the SQL naming convention, DB2 determines the default schema based on the SQL interface used to create the routine. Listing 11 contains the SQL script to create the ORDERADDR1 stored procedure in the SALESPGM schema with SQL naming. This procedure returns the same result as the ORDERADDR procedure (Listing 9), but the source code is slightly modified: instead of joining the ORDER_HEADER table and the ADDRESS_MASTER table located in different schemas, the ORDER_HEADER_JOIN_ADDRESS_MASTER view is used.

Before executing the CREATE PROCEDURE statement, the default schema is explicitly set to SALESDB01. In this case, SALESDB01 is used for the DFTRDBCOL option of the SQL routine. The value of the DFTRDBCOL parameter can be checked by running the PRTSQLINF command or by accessing the SYSPROGRAMSTAT catalog view in QSYS2.
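One way to check these attributes from SQL is to query the catalog view directly. This is a sketch; verify the exact column names of QSYS2.SYSPROGRAMSTAT for your IBM i release:

```sql
-- Show the naming convention and default schema recorded in the
-- program object generated for the ORDERADDR1 procedure.
Select Program_Schema, Program_Name, Naming, Default_Schema
  From QSYS2.SysProgramStat
 Where Program_Schema = 'SALESPGM'
   And Program_Name   = 'ORDERADDR1';
```

For a routine created as described here, the NAMING column should report SQL naming and DEFAULT_SCHEMA should report SALESDB01.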
Listing 11: Routine ORDERADDR1 with static SQL created using SQL naming

Set Schema SALESDB01;

Create Procedure SALESPGM.OrderAddr1
       (In ParAddressNo Integer)
   Dynamic Result Sets 1
   Language SQL
Begin
   Declare CsrC01 Cursor For
      Select Company, OrderNo, OrderDate, AddressNo, Name1, City
        From Order_Header_Join_Address_Master
       Where AddressNo = ParAddressNo;
   Open CsrC01;
End;

Because the DFTRDBCOL parameter is set, the SALESDB01 schema is used to resolve any unqualified DB2 object references in the static SQL statements embedded within the ORDERADDR1 stored procedure. When running the ORDERADDR1 stored procedure with either the System or the SQL naming convention, the ORDER_HEADER_JOIN_ADDRESS_MASTER view is always retrieved from the SALESDB01 schema, and the order header data for company 1 is returned.

Notice that in Listing 12 the default schema is explicitly set to SALESDB02 before the call of the ORDERADDR1 stored procedure, yet the data for company 1 is returned in the stored procedure's result set, signifying that the view was found in the SALESDB01 schema. This behavior demonstrates that the DFTRDBCOL setting of the stored procedure is used, while the default schema value of SALESDB02 is ignored.

Listing 12: Static SQL in a stored procedure created with SQL naming

Set Schema SalesDB02;
Call OrderAddr1(4);

Because the default schema for static SQL statements is determined at compile time and the default schema for dynamic SQL statements is determined at run time, the same SQL statement embedded in the same routine can return different results if that statement is executed as both a static and a dynamic SQL request. By adding the DYNDFTCOL (dynamic default schema) parameter set to *YES on the SET OPTION statement, the dynamic SQL statements are forced to use the same default schema as the static SQL requests.
In embedded SQL programs, the DYNDFTCOL parameter can also be specified on the compile command. Listing 13 shows an excerpt of a CREATE PROCEDURE statement. The default schema for static SQL statements is set to SALESDB01 with the DFTRDBCOL option, and the dynamic SQL statements are forced to use the same default schema as the static SQL statements by setting the DYNDFTCOL option to *YES.

Listing 13: SET OPTION statement with DYNDFTCOL

Create Procedure SALESPGM.OrderAddrX
       (In ParAddressNo Integer)
   Dynamic Result Sets 1
   Language SQL
   Set Option DFTRDBCOL = SALESDB01,
              DYNDFTCOL = *YES
Begin
   -- Routine Body – Source Code
End;

Unqualified data access in SQL triggers

SQL triggers are a special kind of SQL routine that is linked to a table, a physical file, or a view. A trigger program is activated by DB2 as soon as a row in the associated table or view is inserted, updated, or deleted. When creating a trigger program, it is not mandatory to qualify the DB2 object references in the static SQL statements, but the schemas for all the unqualified DB2 objects are resolved when the trigger is created. Thus, the trigger behavior cannot be changed at run time by using a different library list or default schema setting. Again, the naming convention determines how unqualified table, view, or alias references within the SQL trigger are resolved.

In contrast to other SQL routines, a trigger is not created if any of the referenced DB2 tables, views, or aliases does not exist or is not found in the default schema. Because the schemas are resolved and included in the trigger program object, the trigger can be activated and executed correctly in any environment, even when the library list or the default schema is not set as expected.
Listing 14 shows the SQL script for the NEXT_POSITION before-insert trigger, which determines the next order position by adding 10 to the maximum order position (OrderPos) of the current order in the ORDER_DETAIL table. If it is the first position row for the order, the order position number is set to 10.

Listing 14: Trigger created with System naming

CL: CHGLIBL LIBL(SALESDB01 MASTERDB SALESPGM QGPL);

Create Trigger SALESDB01/Next_Position
   Before Insert On SALESDB01/ORDER_DETAIL
   Referencing NEW as N
   For Each Row Mode DB2ROW
      Select Coalesce(Max(OrderPos) + 10, 10) Into N.OrderPos
        From Order_Detail
       Where OrderNo = N.OrderNo;

The ORDER_DETAIL table exists in the SALESDB01 schema as well as in the SALESDB02 schema. Because the current library list contains the SALESDB01 schema, the ORDER_DETAIL table is found in that schema, the trigger program is created, and the resolved SALESDB01 schema name is stored in the trigger program object. The following figure shows the SQL statements returned by the trigger definition task in System i Navigator for the NEXT_POSITION trigger. The originally unqualified table reference is stored together with the resolved schema.

Figure 2: Trigger NEXT_POSITION – routine body

Aliases and views

If your DB2 objects are spread over multiple schemas and you must use the SQL naming convention, you may want to create views or aliases to support unqualified data access.

An alias is a permanent database object that points to either a table or a view; the referenced object can be in the same schema or a different one. Starting with the IBM i 7.1 release, an alias can also reference objects on a remote server. Aliases can also reference individual partitions of a partitioned table or a member of a multi-member physical file. An alias is created by running the CREATE ALIAS statement.
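As a minimal sketch (the alias name ADDRESSES is illustrative, not part of the sample database used in this article):

```sql
-- Permanent alias in the program schema that points to the address
-- master table in a different schema. Code that runs with SALESPGM
-- as its default schema can then reference ADDRESSES without
-- qualifying the MASTERDB schema.
Create Alias SALESPGM/Addresses For MASTERDB/Address_Master;
```

Because the target schema is stored in the alias object when it is created, the alias resolves to the same table regardless of the library list or default schema in effect at run time.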
If the referenced object on the CREATE ALIAS statement is not qualified, the schema is resolved and stored in the alias object. The schema resolution depends on the naming convention that is active on the interface running the CREATE ALIAS statement.

An SQL view is created by running the CREATE VIEW statement and is based on an SQL SELECT statement. Views are a powerful instrument to simplify complex SQL requests and reduce source code. When creating a view based on a SELECT statement with unqualified object references, the schemas are resolved based on the naming convention and stored in the view object. The view is not created if any of the unqualified DB2 objects is not found or does not exist.

In the following example, System naming is used to create the ORDER_HEADER_JOIN_ADDRESS_MASTER view. First, the library list is explicitly set by executing the CHGLIBL command. In the view definition, the ORDER_HEADER table (which is located in either the SALESDB01 schema or the SALESDB02 schema) is joined with the ADDRESS_MASTER table, which is located in the MASTERDB schema.

Listing 16: Create a view with System naming

CL: CHGLIBL LIBL(SALESDB01 MASTERDB QGPL);

Create View SALESDB01/Order_Header_Join_Address_Master as
   Select OrderNo, Company, OrderType, OrderDate,
          DelDate, DelType, AddressNo, a.*
     From Order_Header h Join Address_Master a Using(AddressNo);

The view is created successfully because the ORDER_HEADER table is found in the SALESDB01 schema, the ADDRESS_MASTER table is found in the MASTERDB schema, and both schemas are included in the current library list. The resolved schemas are added to the appropriate table references in the SELECT statement and stored in the view object. To prove this behavior, the following figure shows the Query Text, which is part of the System i Navigator view definition output. Notice how the view definition now contains the schema names SALESDB01 and MASTERDB on the table references.
Figure 3: View created based on unqualified objects

If the CREATE VIEW statement in Listing 16 is executed in an environment where SQL naming is used, it fails, because only a single schema can be searched at a time to resolve the unqualified database objects.

In Listing 17, the ORDER_HEADER_JOIN_ADDRESS_MASTER view created previously (in Listing 16) is accessed with SQL naming. The default schema is explicitly set to SALESDB01 to analyze the data for company 1. Because the ORDER_HEADER_JOIN_ADDRESS_MASTER view is found in this schema, and the view object now explicitly references the order header data located in the SALESDB01 schema, the address information located in the MASTERDB schema can be successfully returned.

Listing 17: Unqualified data access view with SQL naming

Set Schema SALESDB01;

Select Company, OrderNo, OrderDate, AddressNo, Name1, City
  From Order_Header_Join_Address_Master;

Company  OrderNo  OrderDate   AddressNo  Name1         City
1        100      04/28/2012  1          Fischer & Co  Dietzenbach

Accessing other database objects

Until now, only unqualified data access has been discussed. However, there are also other objects, such as stored procedures and user-defined functions (UDFs), that can be called in an SQL environment with or without explicitly specifying the schema. Like tables and views, these objects can be qualified by separating the schema and object name, depending on the naming convention, with either a slash (/) for System naming or a period (.) for SQL naming.

When the invocation of a procedure or function does not explicitly specify the schema, DB2 uses the SQL path instead of the default schema to find it. The SQL path is quite similar to a library list: multiple schemas can be listed, and they are searched in the same sequence in which they are specified.
The initial value of the SQL path depends on the naming convention that is used for the first SQL statement within an activation group. If the System naming convention was used for the first SQL statement, the initial value of the SQL path is set to the special value *LIBL. If the SQL naming convention was used, the SQL path includes the schemas in the following sequence: QSYS, QSYS2, SYSPROC, SYSIBMADM, and the value of the USER special register.

The SQL path can also be set or changed by executing the SET PATH statement, which allows multiple schemas to be listed, separated by commas. The schemas explicitly specified in the SET PATH statement may or may not be part of the current library list. The special value *LIBL can be used to set the SQL path to the current library list, even when the SQL naming convention is used. The default schema setting has no effect on the SQL path; that is, the default schema is not included in the SQL path. This is not an issue, because the SQL path is not searched to find unqualified table, view, or alias objects.

In the following example, the current path is first assigned to the current library list and then changed to a list of schemas.

Listing 15: SET PATH

SET PATH = *LIBL;
SET PATH = QSYS, QSYS2, SALESPGM, HAUSER, HSCOMMON10;

Summary

You should now understand the different behaviors when accessing database objects with either the System or the SQL naming convention, and especially how unqualified access is handled differently for static and dynamic SQL statements. Because of these different behaviors, you should decide on a single naming convention for accessing your database objects with SQL.

- If you are working with typical IBM i applications, where the data is spread over multiple schemas and a library list is always used to resolve unqualified objects, using System naming is probably the best option.
- When working with the System naming convention and dynamic SQL statements, running the SET SCHEMA statement should be avoided. As soon as the SET SCHEMA statement is run, the library list is no longer searched to find unqualified tables, views, or aliases in dynamic SQL statements.
- If System naming is not an option for you, but your data is located in multiple schemas and you want to avoid qualifying database objects, create aliases or views in your primary data schema that point to the tables or views located in other schemas.
- If your data is concentrated in a single schema, or your application is developed to run on different database systems, SQL naming is the best choice.

Resources

- Database information finder
- SQL messages and codes - IBM i
- DB2 for i SQL Reference - 7.1
- IBM i – Database SQL programming - 7.1
- Stored Procedures, Triggers, and User-Defined Functions on DB2 Universal Database for iSeries
- A Sensible Approach to Multi-Step DB2 for i Query Solutions
- IBM developerWorks DB2 for i Forum
- IBM developerWorks - IBM i Technology Updates