Edge computing is designed to help when applications need a fast response but are a long way from central IT resources. The most extreme example of this right now is a self-driving vehicle doing detailed science work on the surface of Mars, more than 200 million km from Earth at the time of its landing. NASA's Perseverance rover has to handle its environment in real time, but signals take more than ten minutes to travel from there to NASA's Mission Control. Besides the delay, communications over that distance are unreliable, so Perseverance has to be prepared to make a lot of decisions locally.

Despite these demands, the tech deployed to Mars is quite modest: the whole Perseverance rover is managed by the same type of PowerPC 750 processor that powered Apple's Bondi Blue iMac back in 1998. There's already an installed base on Mars: the Curiosity rover, which landed in 2012, has the same processor and is still in operation, and the Martian environment provides compelling reasons to stick with this proven technology. The tiny Ingenuity drone copter, a passenger on the mission and a more recent design, actually has a somewhat more powerful processor: the Snapdragon 801, which featured in 2014-era smartphones such as the Sony Xperia Z3.

Yet all this modest kit is achieving remarkable things. Even before it begins its scientific study, Perseverance handled its February 18 landing perfectly, analyzing wind patterns and the behavior of its heatshield during its supersonic entry into Mars' atmosphere, and then using AI to identify a landing site and steer towards it for touchdown.

The entry, descent and landing (EDL) had to be fully autonomous. The probe plunged through Mars' atmosphere at a speed of 12,500mph and a peak temperature of 1,300°C, but NASA engineers on Earth could not take a hand at all, because the whole descent took less than seven minutes. By the time NASA saw Perseverance start to fall, the rover was already sitting on the surface.

Entering the atmosphere

NASA has operated five rovers on Mars, but Perseverance was the first to land with its eyes open. The heatshield and back shell were studded with 28 sensors; for the first four minutes of its descent, the searing temperature and pounding of the atmosphere were recorded by thermocouples, heat flux sensors, and pressure transducers. When the parachute opened, the heatshield and its sensors were jettisoned. The data was stored for transmission back to NASA - and represents the first detailed data from a Mars landing.

This means future Mars missions can have heatshields designed with data from an actual landing, not a simulation. NASA expects this will allow them to make better heatshields which weigh 35 percent less. The pressure sensors will tell NASA about the real dynamics of the Martian atmosphere, including the low-altitude winds the craft hit as it slowed from supersonic speed. Future missions will be able to predict the weather and land with more control, in a smaller footprint. Perseverance's landing target was 4.8 miles by 4.1 miles, already three times smaller than Curiosity's landing target of 15.5 x 12.4 miles. Thanks to the data captured in February, the next probe will land in a space 30 percent smaller.

What happened next is even more impressive. As the parachute opened, Perseverance's radar measured its altitude. With the heatshield gone, the rover's cameras could scan the ground.
On-board pattern recognition picked out features and looked for the landing spot. When the rover slowed to 200mph, the parachute was cut loose and the rover's rockets took over, slowing it right down. At this point the lander vision system (LVS) took over, using "terrain relative navigation" (TRN) to match the rover's camera images to a map of the terrain and guide it to a smooth landing on the jumbled terrain of Jezero Crater (a simplified illustration of this kind of image matching appears at the end of this article). The system was tested as much as possible on Earth, with helicopters and suborbital rockets, but for obvious reasons could not get a full live test until the day of the actual descent.

Before the landing, NASA's TRN lead Swati Mohan said: "If we didn't have Terrain Relative Navigation, the probability of landing safely at Jezero Crater is about 80 to 85 percent. But with Mars 2020, we can actually bring that probability of success of landing safely at Jezero Crater all the way up to 99 percent every single time." On the day, when she was the public face of NASA, calling out the telemetry, she said: "It wasn't until after I called 'touchdown confirmed' and people started cheering that I realized, 'oh my gosh, we actually did this. We are actually on Mars. This is not a practice run. This is the real thing'."

Jezero is the hardest landing site NASA has chosen for any Mars mission, and it picked it for a reason. Perseverance touched down in an ancient river delta that fed a lake that filled the crater three billion years ago. If there ever was life on Mars, this is the best place to look for signs of it. Perseverance is kitted out with scientific instruments to look for signs of ancient life in the delta deposits. It will also drill out and cache interesting rocks for recovery by a later mission. That mission will require whole new techniques, but is due to launch in 2026. Perseverance will also carry out a key test for possible manned Mars missions in the future: producing oxygen from the Martian atmosphere.

All this work will be done more or less autonomously, guided by high-level instructions from Earth and sending back a payload of scientific data. It really is the farthest Edge computing has ever gone, and it embodies several extremes: low data rates, unreliable links, and a "right-sized" processor and memory architecture. It also has absolutely zero chance of any human maintenance and support visits. Compared to the tight budgets of Perseverance, Earth-bound Edge systems have an embarrassment of riches, with 5G networks, mains electricity and the possibility that someone might come by and reboot them. While NASA leads scientists around the world in learning from this Mars mission, digital infrastructure builders will be able to learn a lot about the limits of Edge computing.
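The terrain relative navigation described above is, at its core, an image-registration problem: find features in the descent camera view and locate them in a stored map of the landing area. The sketch below is only a rough illustration of that idea using off-the-shelf OpenCV feature matching; it is not NASA's LVS flight software, and the file names are placeholders.

```python
import cv2
import numpy as np

# "orbital_map.png" stands in for a pre-loaded reference map of the landing area,
# and "descent_frame.png" for a single frame from the descent camera.
reference = cv2.imread("orbital_map.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("descent_frame.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe distinctive features in both images.
orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frm, des_frm = orb.detectAndCompute(frame, None)

# Match binary descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_frm, des_ref), key=lambda m: m.distance)[:50]

# A homography from the camera frame to the map tells us where the camera is
# looking on the map; RANSAC discards bad matches.
src = np.float32([kp_frm[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("Estimated frame-to-map transform:\n", homography)
```

A real landing system adds much more on top of this (lighting invariance, fused inertial and radar data, and hard real-time guarantees), but the basic match-against-a-map step is the same idea.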
Small Form-Factor Pluggable (SFP) optical modules play a crucial role in modern networking environments, providing the flexibility and scalability necessary for efficient data transmission. Understanding the differences between 1G and 10G SFP modules is essential for network administrators and technicians to optimise network performance and ensure compatibility. In this blog post, we'll delve into the fundamentals of SFP optical modules and explore various methods to distinguish between 1G and 10G SFPs.

Introduction to SFP Optical Modules

SFP optical modules, also known as Mini-GBIC (Gigabit Interface Converter), are hot-swappable transceivers commonly used in networking equipment. They facilitate the transmission of data over optical fibre cables and support various data rates and communication protocols, making them versatile components in modern networks.

What is a 1G SFP module?

A 1G SFP module, also known as a 1-gigabit small form-factor pluggable module, is a type of transceiver used in both telecommunications and data communications applications. It is designed to support communication over fibre optic or sometimes copper networking cables at speeds up to 1 gigabit per second (Gbps). There are many types of 1G SFP optical modules, mainly single-mode and multimode. The single-mode optical module is suitable for long-distance transmission, while the multimode optical module is suitable for short-distance transmission. Additionally, there are differences between various brands and models of 1G SFP optical modules, such as the supported maximum distance, wavelength, interface type, etc., which need to be selected according to specific requirements.

What is a 10G SFP module?

The 10G SFP module, also known as a 10Gb small form-factor pluggable (SFP+) transceiver, is an upgraded version of the standard SFP module that supports data rates up to 10Gb per second. It usually consists of components such as packaging, interfaces, optical transceivers, and circuit boards, and transmits data over multimode and single-mode fibres through SFP+ slots in network devices such as switches or routers. Compared with the 1G module, it is designed to handle larger bandwidths, making it very suitable for high-speed data transmission applications.

How to Differentiate Between 1G and 10G SFP+

One of the primary methods to differentiate between 1G and 10G SFP modules is physical identification. Manufacturers often label SFP modules with clear markings indicating their speed compatibility, such as "1G" or "10G". These labels are typically located on the front or top surface of the module and provide a quick reference for identifying the speed rating.

Another method involves checking the configuration settings of the SFP module within the networking device. Network administrators can access the device's management interface and view the configured speed of the SFP port. This method provides direct insight into the operational speed of the SFP module (a small script that reads the module's EEPROM, sketched at the end of this post, offers a similar check from a Linux host).

Optical Power Detection

Optical power detection is a practical approach to differentiating between 1G and 10G SFP modules. By measuring the optical power output of the SFP module using a power meter or optical time-domain reflectometer (OTDR), technicians can determine whether the module operates at 1G or 10G speed. Higher optical power levels typically indicate 10G operation.

Spectrum Analysis

Spectrum analysis involves examining the spectral characteristics of the optical signal transmitted by the SFP module.
Technicians can use optical spectrum analysers to analyse the frequency components of the signal and identify patterns associated with specific data rates, such as 1G or 10G. This method provides a comprehensive understanding of the SFP module’s operational characteristics. In summary, differentiating between 1G and 10G SFP modules requires a combination of physical identification, configuration checks, optical power detection, and spectrum analysis. Network administrators and technicians should leverage these methods collectively to accurately identify the speed of SFP modules within their network infrastructure. By understanding the capabilities of SFP modules, organisations can optimise network performance and ensure seamless compatibility in diverse networking environments. Mastering SFP management is crucial for robust, efficient networks. Knowing how to distinguish between 1G and 10G SFPs enables better network setup and performance. If you require assistance in selecting the most suitable product, feel free to consult our sales team for expert guidance.
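As a complement to the label and configuration checks above, on a Linux host you can usually read the module's own EEPROM (the SFF-8472 data exposed by `ethtool -m`) and look at its nominal signalling rate. The snippet below is a rough sketch rather than a vendor-supported tool: the interface name is a placeholder, not every NIC driver supports module EEPROM reads, and the exact output formatting can vary by driver.

```python
import re
import subprocess

def sfp_nominal_rate_mbd(interface):
    """Return the module's nominal signalling rate in MBd, or None if not found."""
    out = subprocess.run(["ethtool", "-m", interface],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"BR, Nominal\s*:\s*(\d+)\s*MBd", out)
    return int(match.group(1)) if match else None

rate = sfp_nominal_rate_mbd("eth0")  # interface name is a placeholder
if rate is not None:
    # Roughly: 1G modules typically report around 1300 MBd, 10G SFP+ around 10300 MBd.
    print("1G module" if rate < 5000 else "10G module", f"({rate} MBd)")
```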
Netacea's Approach to Machine Learning: Unsupervised and Supervised Models

Our world is driven by technological innovation. Recent years have seen many companies adopt artificial intelligence (AI) and machine learning technology to analyze larger data sets and perform more complex tasks with faster and more accurate results. This is not limited to technology-based industries such as computer science – now, many industries work continuously to enhance their technology to keep up with consumer expectations, with data-based decision-making often central to this drive.

What is a machine learning model?

Designed to imitate the way that humans learn, machine learning models make use of data and algorithms to gather knowledge and gradually improve accuracy over time. There are many machine learning applications; the two most commonly used and referred-to approaches are supervised learning and unsupervised learning. The following outlines the differences between supervised and unsupervised machine learning, the benefits and drawbacks of each approach, and how Netacea uses a combination of the two machine learning models, alongside anomaly detection, in our unique approach to bot management.

Types of machine learning models

Supervised machine learning

Supervised learning models are characterized by their use of labelled data, which is used to teach algorithms to classify data or predict accurate outcomes based on the labelled training data. Supervised learning algorithms can often be categorized into two types:
- Classification uses an algorithm to assign new data to specific categories, based on training data.
- Regression is a supervised machine learning algorithm used to predict continuous values, again based on the initial training data.

Supervised learning models are best suited to situations where there is a set of available reference points on which to train the data. That being said, data is not always able to perfectly align within certain categories or labels; when this is the case, unsupervised machine learning models can provide a solution.

Unsupervised machine learning

Unsupervised machine learning models are used to analyze and group sets of unlabelled data. They can help with pattern recognition for previously unseen or undetected patterns within data, without being explicitly programmed or requiring any human intervention. There are three types of unsupervised machine learning algorithms:
- Clustering
- Association
- Dimensionality reduction

"Clustering" looks for similarities and differences within the data and will then use this information to form groups or 'clusters' of data. Similarly, "association" is an unsupervised machine learning algorithm that uses different rules or rulesets to find relationships between variables within the data. If the number of features in a set of data is too high, "dimensionality reduction" can be used to reduce the number of inputs to a more manageable size. Dimensionality reduction is sometimes used as a pre-processing step for supervised machine learning models.

Unsupervised machine learning models allow you to find and group previously unknown patterns within the data, without any initial manual input of labels or categories.

Benefits and drawbacks of machine learning models

While each approach has its merits, there are also some drawbacks to using one machine learning model over the other.
Supervised learning is a simpler method of machine learning, beneficial in situations where the goal is to predict outcomes of new data while you already know the type of results to expect. Although supervised learning helps you collect data, make predictions, and optimize performance criteria following the input of initial labels, supervised machine learning models can be time-consuming and often require expertise when it comes to labelling the initial inputs.

Unsupervised learning is beneficial when the goal is to gather insights from large volumes of new, previously uncategorized data, or for anomaly detection. Whilst unsupervised learning is more adaptive and allows you to discover previously unknown patterns from data and find features for categorization, results from unsupervised machine learning models require expert human intervention and analysis to validate.

Why Netacea uses both machine learning models

Netacea's multi-dimensional approach to bot management has our team of data scientists and bot experts using a combination of both supervised and unsupervised machine learning, as well as anomaly detection, to keep ahead of the continuously evolving bot threat. Supervised learning allows us to ask, "Does this attack match a known attack pattern?". We can then compare the data streams from our clients with those within our Bot Attack Intelligence feed, giving us the ability to stop known bot attacks, as well as predict and prevent future attacks from occurring. While supervised learning allows us to detect known attacks, unsupervised learning allows us to detect suspicious behavior, or patterns of behavior relating to new or previously unknown attack vectors, by comparing the behavior of one user to others in the system. We use real-time clustering to group similar users, allowing us to spot when new clusters are created, highlight odd or atypical behavior, and constantly re-evaluate what a 'normal' pattern of behavior looks like.
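To make the distinction concrete, here is a minimal sketch of the two approaches side by side, using scikit-learn on invented traffic features. The feature names, numbers, and thresholds are made up for illustration; this is not Netacea's actual pipeline or feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Invented per-visitor features: [requests per minute, pages per session, error rate]
human = rng.normal([5, 8, 0.02], [2, 3, 0.01], size=(500, 3))
known_bot = rng.normal([120, 2, 0.30], [30, 1, 0.10], size=(100, 3))

# Supervised: learn to recognise traffic that matches known attack patterns.
X = np.vstack([human, known_bot])
y = np.array([0] * len(human) + [1] * len(known_bot))
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Unsupervised: cluster unlabelled traffic and flag small, unusual clusters.
unlabeled = np.vstack([
    rng.normal([5, 8, 0.02], [2, 3, 0.01], size=(300, 3)),   # normal visitors
    rng.normal([400, 1, 0.05], [50, 1, 0.02], size=(10, 3)),  # novel behaviour
])
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(unlabeled)
sizes = np.bincount(km.labels_)
suspicious = int(np.argmin(sizes))  # the smallest cluster is worth a closer look
print("cluster sizes:", sizes, "-> inspect cluster", suspicious)
```

The supervised model only ever recognises patterns it was shown labels for, while the clustering step can surface behaviour nobody has labelled yet, which is exactly the complementarity described above.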
Scientists Leverage AI and Quantum Biology to Improve Genome Editing Tool

In a groundbreaking effort to optimize CRISPR Cas9 genome editing tools, scientists at Oak Ridge National Laboratory (ORNL) have harnessed the power of quantum biology, artificial intelligence (AI), and bioengineering. Their focus has been on enhancing the performance of CRISPR on microbes, which hold the potential for producing renewable fuels and chemicals.

CRISPR Cas9, a renowned bioengineering instrument, has typically been less efficient when applied to microbes due to the reliance on models built from limited species data. "Many CRISPR tools have been developed for mammalian cells, fruit flies, or other model species. Few have been geared towards microbes where the chromosomal structures and sizes are very different," said Carrie Eckert, leader of the Synthetic Biology group at ORNL. This observation led the team to seek a deeper understanding of cellular processes at the quantum level to improve guide RNA design, leading to further advancements in quantum biology.

The ORNL scientists developed an explainable AI model called an iterative random forest, which was trained on a vast dataset of guide RNAs focusing on quantum chemical properties. This model has proven a significant step forward, identifying crucial nucleotide features to enhance the selection of effective guide RNAs for CRISPR technology. "The model helped us identify clues about the molecular mechanisms that underpin the efficiency of our guide RNAs," shared computational systems biologist Erica Prates.

This research, extending beyond the microbial world, has promising implications for drug development and other areas where CRISPR technology could be applied. "This paper even has implications across the human scale," Eckert mentioned, highlighting the importance of accurate models for targeting specific genomic regions.

By integrating quantum biology into their models, the ORNL team has opened a new frontier for CRISPR Cas9 enhancements across various species. Their work contributes to a broader understanding of functional genomics, linking genes to physical traits and propelling the efficiency of genome editing tools. "We're greatly improving our predictions of guide RNA with this research," Eckert concluded, underscoring the ambition to refine these tools for precision and rapidity in scientific research.

Kenna Hughes-Castleberry is a staff writer at Inside Quantum Technology and the Science Communicator at JILA (a partnership between the University of Colorado Boulder and NIST). Her writing beats include deep tech, quantum computing, and AI. Her work has been featured in Scientific American, Discover Magazine, New Scientist, Ars Technica, and more.
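For readers curious what a guide-RNA efficiency model of the kind described above looks like in code, here is a bare-bones random-forest sketch with scikit-learn. Everything in it is illustrative: the features and target are synthetic stand-ins, and a plain random forest is used rather than the iterative, explainable variant the ORNL team actually built.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Invented stand-ins for guide-RNA descriptors (GC content, a duplex-stability
# proxy, and a placeholder "quantum chemical" score). The real features are richer.
n_guides = 2000
X = np.column_stack([
    rng.uniform(0.3, 0.8, n_guides),   # GC fraction
    rng.normal(60, 5, n_guides),       # duplex stability proxy
    rng.normal(0, 1, n_guides),        # placeholder quantum-chemical descriptor
])
# Synthetic "editing efficiency" target, just so the pipeline runs end to end.
y = 0.5 * X[:, 0] + 0.01 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.05, n_guides)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out guides:", round(model.score(X_te, y_te), 3))
print("feature importances:", model.feature_importances_.round(3))
```

The appeal of tree ensembles in this setting, as the article notes, is that feature importances give researchers clues about which nucleotide properties drive efficiency, rather than acting as a black box.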
The Democratic Republic of the Congo (DRC) has been a focal point in the global effort to combat infectious diseases, particularly those endemic to the region. Mpox, a viral zoonosis caused by the monkeypox virus, has been one of the most persistent and challenging public health issues in Central Africa, particularly with the presence of Clade I, the more virulent strain of the virus. In this context, the PALM007 study, co-sponsored by the National Institutes of Health (NIH) and conducted in partnership with the DRC's Institut National de Recherche Biomédicale (INRB), sought to evaluate the safety and efficacy of tecovirimat, an antiviral drug initially developed for smallpox, in treating Clade I mpox. This article delves into the critical findings of the PALM007 trial, the broader implications for public health in the DRC, and the future of mpox treatment globally.

| Concept | Simple Explanation | Importance |
| Mpox (Monkeypox) | A viral disease that causes skin rashes, fever, and other symptoms. It has been found in different parts of Africa. | Understanding mpox helps in recognizing and managing this disease, especially in affected regions. |
| Clade I Mpox | A type of mpox virus found in Central Africa that causes more severe illness. | Knowing about Clade I is important because it can cause more serious health problems. |
| Tecovirimat (TPOXX) | A medication developed to treat smallpox, now being tested to see if it can help treat mpox. | Understanding tecovirimat is crucial as it might help in treating mpox, though its effectiveness is still being studied. |
| PALM007 Study | A research project in the Democratic Republic of the Congo to see if tecovirimat helps people with mpox. | This study is key because it helps determine if tecovirimat is a good treatment option for mpox. |
| Safety of Tecovirimat | Tecovirimat was found to be safe to use, with no serious side effects. | Knowing that the drug is safe is important for making decisions about its use in treatments. |
| Effectiveness of Tecovirimat | The study showed that tecovirimat did not help clear up the mpox rash faster. | This finding is important because it shows that more research is needed to find better treatments. |
| Supportive Care | Medical care that helps with symptoms, like giving fluids and treating infections, which was provided to all study participants. | Supportive care is crucial as it greatly improved survival rates in the study. |
| Mortality Rate | The percentage of people who died during the study was lower than usual for mpox in the region. | This shows that with good medical care, people with mpox have a better chance of surviving. |
| Future Research | Ongoing studies are looking into other possible treatments and more about how tecovirimat works in different groups. | Future research is essential to find better treatments for mpox and help those affected. |

Background: Mpox and Its Endemic Presence in Central Africa

Mpox has been present in West, Central, and East Africa for decades, with the first human case identified in 1970. The virus belongs to the Orthopoxvirus genus, which also includes the variola virus that causes smallpox. Two distinct genetic clades of the monkeypox virus have been identified: Clade I, which is endemic to Central Africa and is known to cause more severe illness, and Clade II, found in West Africa and typically associated with milder disease.
The global mpox outbreak in 2022 was linked to a Clade II subtype, underscoring the virus's potential for international spread and the importance of continued research into effective treatments. The Democratic Republic of the Congo has seen an increase in reported cases of Clade I mpox in recent years, particularly among vulnerable populations such as children, people with compromised immune systems, and pregnant individuals. A 2023 report from the Centers for Disease Control and Prevention (CDC) indicated that 67% of suspected mpox cases and 78% of suspected mpox deaths in the DRC occurred in individuals aged 15 years or younger, highlighting the critical need for effective therapeutic interventions.

Tecovirimat: A Potential Treatment for Mpox

Tecovirimat, also known by its commercial name TPOXX, was originally developed and approved by the U.S. Food and Drug Administration (FDA) for the treatment of smallpox. Given the close relationship between the monkeypox virus and the variola virus, tecovirimat has been considered a promising candidate for mpox treatment. The drug's safety and efficacy against mpox, however, have not been fully established, leading to its inclusion in investigational trials such as PALM007 in the DRC.

The Antiviral Tecovirimat: Safe but Ineffective in Clade I Mpox Resolution

A key focus of the PALM007 study was the evaluation of tecovirimat, an antiviral drug initially developed for smallpox, in treating Clade I mpox in the Democratic Republic of the Congo (DRC). The study's findings highlighted a critical issue: while tecovirimat was found to be safe for use, it did not improve the resolution of mpox lesions caused by Clade I in the trial participants.

Despite the drug being well-tolerated, with no serious adverse events reported among those who received it, the primary outcome of the trial—a reduction in the duration of mpox lesions—was not achieved. The analysis showed that the time to lesion resolution was similar between the participants who received tecovirimat and those who were given a placebo. This result was disappointing, particularly given the urgent need for effective treatments in regions where Clade I mpox is endemic and causes severe illness.

However, the trial did reveal a positive aspect: the overall mortality rate among participants was significantly lower than the historical averages for mpox in the DRC. This suggests that, while tecovirimat may not directly influence lesion resolution, the high-quality supportive care provided during the study played a crucial role in improving survival rates.

The outcome of this trial underscores the complexity of treating Clade I mpox and highlights the need for continued research into alternative therapeutic options. While tecovirimat's safety profile remains a significant finding, its lack of efficacy in this particular context calls for further investigation and the exploration of other antiviral candidates that might offer more promise in the treatment of severe mpox cases in Central Africa.

This chapter of the PALM007 study serves as a sobering reminder that even well-established antiviral drugs like tecovirimat may not always perform as expected in different clinical settings or against varying strains of a virus. It also emphasizes the importance of rigorous clinical trials in endemic regions to provide evidence-based guidance for treatment strategies, ensuring that the global response to diseases like mpox is informed by data and tailored to the specific needs of affected populations.
The PALM007 Study: Objectives and Methodology

The PALM007 trial was launched in October 2022 as a collaborative effort between NIAID and INRB to evaluate the safety and efficacy of tecovirimat in treating Clade I mpox among adults and children in the DRC. The study enrolled 597 participants with laboratory-confirmed mpox at two sites: Tunda in Maniema province and Kole in Sankuru province. Participants were randomly assigned to receive either tecovirimat or a placebo and were admitted to a hospital for a minimum of 14 days. During this time, they received comprehensive supportive care, including nutrition, hydration, and treatment for secondary infections, while being closely monitored for safety and the resolution of mpox lesions.

The study was designed as a randomized, placebo-controlled trial, the gold standard in clinical research, to ensure the reliability of the results. The primary endpoints included the duration of mpox lesions and overall mortality, with secondary endpoints focusing on drug-related adverse events and the speed of lesion resolution.

Initial Findings: Efficacy and Safety of Tecovirimat

The preliminary analysis of the PALM007 trial data revealed that tecovirimat was well-tolerated among the study participants, with no serious adverse events attributed to the drug. However, the drug did not significantly reduce the duration of mpox lesions compared to the placebo. Despite this, the study's overall mortality rate of 1.7% among enrollees was notably lower than the 3.6% or higher mortality rate typically reported for mpox cases in the DRC. This suggests that high-quality supportive care, as provided during the trial, can substantially improve outcomes for people with mpox, regardless of whether they receive tecovirimat or not.

Challenges and Considerations in the PALM007 Trial

While the initial findings from PALM007 may seem disappointing in terms of tecovirimat's efficacy in lesion resolution, they offer valuable insights into the complexities of treating mpox, particularly Clade I in Central Africa. One key challenge is the variability in clinical outcomes based on factors such as the severity of the disease at the time of enrollment, participant characteristics, and the specific genetic variant of mpox being treated. Further analysis is needed to determine whether certain subgroups of patients might benefit more from tecovirimat or whether alternative therapeutic approaches should be prioritized.

Additionally, the study underscores the importance of supportive care in managing mpox. The lower-than-expected mortality rate among trial participants highlights the potential for improved survival rates with appropriate medical intervention, even in the absence of a highly effective antiviral treatment. This finding is particularly relevant in resource-limited settings like the DRC, where access to advanced medical care is often restricted.

Broader Implications for Mpox Treatment in Central Africa

The results of the PALM007 trial have significant implications for the management of mpox in Central Africa and beyond. While tecovirimat may not be the definitive answer to Clade I mpox treatment, the study has contributed to a better understanding of the disease and the role of supportive care in improving patient outcomes. As mpox cases continue to rise in the DRC and other Central African countries, there is an urgent need to identify and develop additional therapeutic options that can more effectively target the virus.
The trial also highlights the critical role of international collaboration in addressing infectious diseases in endemic regions. The partnership between NIAID, INRB, and other global health organizations in conducting the PALM007 study demonstrates the importance of pooling resources and expertise to tackle complex public health challenges. Such collaborations are essential for advancing our knowledge of diseases like mpox and for developing interventions that can be deployed in regions where they are most needed.

Ongoing and Future Research on Tecovirimat and Mpox

The findings from PALM007 have not deterred researchers from continuing to explore the potential of tecovirimat in treating mpox. Ongoing studies, such as the international STOMP trial, are examining the drug's safety and efficacy against Clade II mpox, which caused the global outbreak in 2022. The UNITY study, sponsored by ANRS Emerging Infectious Diseases, is conducting similar research in Argentina, Brazil, and Switzerland. These studies are critical for determining whether tecovirimat can be an effective treatment for mpox across different populations and viral clades.

Moreover, the PALM007 trial has paved the way for further research into the genetic and immunological factors that influence mpox outcomes. By analyzing the trial data in greater detail, researchers hope to identify specific patient subgroups that may benefit from targeted therapies or tailored supportive care strategies. This could lead to more personalized approaches to mpox treatment, improving outcomes for those most at risk of severe disease.

The Role of Public Health Infrastructure in Mpox Management

One of the key lessons from the PALM007 trial is the importance of robust public health infrastructure in managing outbreaks of diseases like mpox. The success of the trial in providing high-quality supportive care to participants, even in remote regions of the DRC, underscores the value of investing in healthcare systems that can respond effectively to infectious disease threats. Strengthening public health infrastructure in Central Africa is crucial not only for managing mpox but also for addressing other endemic diseases that pose a significant burden on the region's population.

The trial also highlights the need for continued surveillance and reporting of mpox cases to ensure timely detection and response to outbreaks. Public health authorities in the DRC and other affected countries must be supported in their efforts to monitor the spread of the virus and to implement control measures that can mitigate its impact on vulnerable populations.

Conclusion: A Path Forward for Mpox Treatment and Research

The PALM007 study has provided valuable insights into the treatment of Clade I mpox in the DRC, even if the results were not as conclusive as researchers had hoped. While tecovirimat did not significantly shorten the duration of mpox lesions, the trial demonstrated the critical importance of supportive care in improving patient outcomes. The findings also underscore the need for continued research into alternative therapeutic options and the potential for personalized treatment approaches based on patient characteristics and disease severity. As the global community continues to grapple with the challenges posed by emerging infectious diseases, the lessons learned from the PALM007 trial will be instrumental in guiding future research and public health interventions.
By building on the knowledge gained from this study, researchers and healthcare providers can work towards developing more effective treatments for mpox and other diseases that disproportionately affect vulnerable populations in Central Africa and beyond.

In conclusion, while the journey to finding an optimal treatment for mpox continues, the PALM007 trial marks a significant step forward in our understanding of the disease and the strategies needed to combat it. With ongoing international collaboration and a commitment to research, there is hope that we will eventually develop interventions that can alleviate the burden of mpox and improve the lives of those affected by this challenging and persistent disease.

Reference: https://www.nih.gov/news-events/news-releases/antiviral-tecovirimat-safe-did-not-improve-clade-i-mpox-resolution-democratic-republic-congo
What about this course?

This course will provide a gentle introduction to programming concepts and will cover the benefits of learning how to program, why Python is a great language to learn, and the software development lifecycle at a very high level. The course will also cover a brief history of past methods of automated interaction with network hardware, and compare those methods with more modern methods available today, such as NETCONF, as well as embedded Python interpreters in network hardware operating systems. This course is based on Python Release 2.7.

This course is composed of the following modules:
- Why Learn to Program?
- Does any of this matter?
- Tcl and Expect
- Telnet & SSH
- What is NETCONF?
- Comparing NETCONF and Other Methods
- What is YANG?
- Python Embedded in the OS
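For a taste of the "more modern methods" the outline mentions, this is roughly what a NETCONF request looks like from Python using the ncclient library. It is a sketch only: the device address, port, and credentials are placeholders, and the capabilities and configuration returned depend entirely on the network operating system you connect to.

```python
from ncclient import manager

# Placeholder address and credentials for a lab device that speaks NETCONF over SSH.
with manager.connect(host="192.0.2.1", port=830, username="admin",
                     password="admin", hostkey_verify=False) as conn:
    # Capabilities advertised by the device (the YANG models it supports, etc.)
    for capability in conn.server_capabilities:
        print(capability)

    # Retrieve the running configuration as structured XML rather than screen-scraped text.
    running = conn.get_config(source="running")
    print(running.xml)
```

Compared with driving a Telnet or SSH session through Expect-style scripting, the point of NETCONF is that the device returns structured data described by YANG models instead of text meant for human eyes.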
What is data classification?

Data classification is a way of grouping data to ensure easy sorting, retrieval, and prioritization. The data is divided into categories, and a label, or a tag, is applied to make it easily searchable. The three commonly used types of data classification are:
- Content-based, which is done solely based on the information involved,
- Context-based, which takes into account the location of the data, the owner, the application it is used in, and others,
- User-based, which requires users to label data based on internal rules.

Benefits of data classification

1. Risk management

According to AWS, data classification is a foundational step in cybersecurity risk management. The reason behind this is that applying labels to data and establishing security requirements such as:
- the level of confidentiality,
- the need for integrity checking,
- the sensitivity of data,
can help your company manage risks efficiently.

When implementing compliance with international standards, you must know what type of data your company is managing and storing. Data classification should be done correctly to understand which of the data you're storing is confidential or sensitive. You cannot comply with recognized frameworks unless you correctly handle confidential data (and you cannot do this unless you know which data is confidential). Let's look at a scenario – if you're storing customer PII, but you are not aware of the criticality of that data, you may not even think of protecting it, for example, by encrypting it. Therefore, your company may not be compliant with standards like:
- SOC 2,
- GDPR, and others.

Organizing data into categories and using labels can help you maintain:
- confidentiality, because you will turn your focus to the most sensitive data,
- integrity, because you can mark the need for integrity as high for some data using labels,
- availability, which can be explicitly ensured for data that needs to be highly available and is labeled as such.

Much of the data that used to be saved on-premises is now saved and processed in the cloud – in databases, storage assets, and others. Therefore, data classification should be used in the cloud. Let's look at what the cloud industry offers to help you easily and accurately classify your data.

Data classification – an industry overview

We will look at the top 3 cloud vendors – AWS, Microsoft Azure, and Google Cloud – to see how data classification is implemented and the different types of tags that can be applied depending on the cloud service selected.

1. Amazon Web Services

In the AWS documentation, a three-tiered classification is recommended, with tag names such as "Secret and above". Moreover, AWS presents and recommends the three labels used by NIST (National Institute of Standards and Technology), a United States government agency, which classify the impact a potential data breach would have on that data. However, these tags are recommendations, and users can implement their own tags. Later in this article, you will find best practices on how to implement labeling for your cloud environment.

When you create a resource in AWS, you can add tags (key-value pairs) to the resource to associate it with the labels used in data classification (a short example of adding such a tag programmatically appears at the end of this post).

2. Google Cloud Platform

For data classification in Google Cloud, we can find both labels and tags, which are two different things. A label is described as a key-value pair that you can create using the Resource Manager API and the Google Cloud console.
These can be used to separate resources in terms of billing, to add information about resource state, and so on. Tags, however, are the tools that allow Google Cloud customers to classify data and establish rules based on their classification. The difference between labels and tags in Google Cloud is that labels are simply metadata added to resources, while tags categorize assets and can be used when defining policies and rules (for example, who is allowed to access a certain asset) in your Google Cloud environment.

3. Microsoft Azure

In Microsoft Azure, we can use the Microsoft Purview service to ensure data labeling of cloud assets. Microsoft Purview is a solution offered by Microsoft that brings together your cloud, on-premises, and SaaS data and helps you manage it through different solutions:
- Data Map,
- Data Catalog,
- Data Sharing,
- Data Estate Insights, and others.

An important aspect is that Data Map powers most of the solutions offered by Microsoft Purview and is a paid service. In terms of data classification, there are a few services that can help you manage your cloud resources:
- the Microsoft Purview Data Catalog, which uses sensitivity labels that can be added to cloud assets,
- the Microsoft Purview Information Protection service, which has the following features: data classification, trainable classifiers, sensitive information types,
- the Azure Information Protection unified labeling client, a downloadable client that also provides sensitivity labels.

Microsoft Azure suggests that you apply tags that contain additional information about resources (do not include any PII or sensitive data in the tags) to:
- add context to your resources and understand them better,
- be able to use complex filters.

Azure also suggests a "Data classification" tag to describe the sensitivity of data stored or processed by a resource. If an organization does not have its own labels defined, it may use values supplied by Microsoft, such as "Highly confidential". Moreover, Azure recommends that you also use a formal data classification process. In the next section, we will explain best practices to keep in mind when classifying your resources.

How do you implement data classification?

An important rule to follow when implementing data classification is that the entire organization should use the same classification tags/labels. Using a policy or a procedure for this process that regulates:
- the classification process as a whole,
- the tags' names, and others,
is essential to ensuring consistent data classification.

In Cyscale, you can find the out-of-the-box "Data Management" policy, which contains the "Data Classification" procedure to guide you in this process. Moreover, you can create your own custom policy with any specific rules you want to add.

Considering the benefits of data classification, implement this feature for your cloud environment. Use a company-level classification policy and add tags to your cloud assets to enable easy sorting, retrieval, and prioritization. In Cyscale, you can use this feature to:
- easily filter assets based on tags,
- highlight any irregularities regarding your most sensitive assets, and
- prioritize remediation for the most urgent findings.
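As a concrete illustration of the tagging step mentioned in the AWS section above, here is a minimal sketch that applies a classification tag to an S3 bucket with boto3. The bucket name and tag values are placeholders; align them with your own classification policy, and note that other resource types have their own tagging APIs that work the same way.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket name and tag values; PutBucketTagging replaces the
# bucket's existing tag set, so include every tag you want to keep.
s3.put_bucket_tagging(
    Bucket="example-customer-exports",
    Tagging={"TagSet": [
        {"Key": "DataClassification", "Value": "Confidential"},
        {"Key": "Owner", "Value": "data-platform-team"},
    ]},
)

# Read the tags back to confirm they were applied.
tags = s3.get_bucket_tagging(Bucket="example-customer-exports")["TagSet"]
print(tags)
```

Once tags like these are in place consistently, the filtering and prioritization workflows described above become a matter of querying on the tag values.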
Cloud Security Analyst at Cyscale, Sabrina Lupsan merges her academic knowledge in Information Security with practical research to analyze and strengthen cloud security. At Cyscale, she leverages her Azure Security Engineer certification and her Master's in Information Security to keep the company's services at the leading edge of cybersecurity developments.
Welcome to a journey into the annals of computing history. Today's topic is a true classic – Control Program for Microcomputers, better known as CP/M.

What is CP/M?

Control Program for Microcomputers, or CP/M, is an operating system developed in the 1970s. Created by Gary Kildall of Digital Research Inc., it was the first popular microcomputer operating system. Designed to be largely hardware-independent, it could run on machines built around the Intel 8080 or 8085 processor, or the compatible Zilog Z80.

What devices used CP/M?

CP/M was widely adopted and found its way into various personal computers of the late 1970s and early 1980s. These included systems like the Osborne 1, Kaypro II, and the original IBM PC. The hardware independence of CP/M made it a popular choice among manufacturers and users alike.

Is CP/M still around?

While CP/M has largely faded from mainstream use, it remains a significant piece of computing history. It is no longer in active development or use, but its influence can still be seen in today's operating systems. There are also hobbyist communities and retro computing enthusiasts who continue to explore and enjoy CP/M.

Is MS-DOS the same as CP/M?

Many people wonder if Microsoft's MS-DOS is the same as CP/M. While there are similarities, they are not the same. MS-DOS was inspired by CP/M and shares many of its concepts. However, it was designed specifically for the Intel 8086 processor, unlike the hardware-independent CP/M.

In conclusion, CP/M played a pivotal role in the history of computing. It helped standardize the concept of an operating system and set the stage for future developments. While it may no longer be in use, the legacy of CP/M lives on, and its influence can still be seen in the world of computing today.
Vulkan is a new low-level API that developers can use to access the GPU. It can be used instead of OpenGL or Direct3D, and it is essentially the successor to OpenGL, as the standard is created by the Khronos Group, a standards organization. Khronos created Vulkan to be an open, royalty-free standard. Developers are able to take advantage of Vulkan's reduced CPU overhead and efficient performance in games, applications, and mobile. Version 1.0 of the specification was released today, and the first Vulkan SDK, from LunarG, was also released for Windows and Linux. Vulkan is available on multiple versions of Microsoft Windows from Windows 7 to Windows 10, and has been adopted as a native rendering and compute API by platforms including Linux, SteamOS, Tizen and Android. AMD, ARM, Intel, NVIDIA, and other industry pillars have been quick to adopt the standard. NVIDIA offers beta support for Vulkan in Windows driver version 356.39 and Linux driver version 355.00.26. AMD similarly offers beta support for Vulkan with beta drivers for Windows 7 – Windows 10.
If a picture is worth a thousand words, then a video is priceless. And in today's world, where security is paramount, capturing high-quality video footage has become a necessity. This is where Network Video Recorders (NVRs) come into play. In this blog post, we will explore what exactly a Network Video Recorder is, how it works, and why it is an essential component in any security camera setup.

Introducing the Network Video Recorder (NVR)

When it comes to surveilling your home or business, security cameras are a crucial tool in keeping a watchful eye on your surroundings. However, simply having cameras in place is not enough. Without a reliable method to store and manage the footage, the entire surveillance system may be rendered ineffective. This is where a Network Video Recorder (NVR) comes into play.

An NVR is a specialized device that is designed to record, store, and manage video footage captured by security cameras. Just like a hard drive on your computer, an NVR provides a centralized location for storing and organizing video data. But unlike traditional analog recording systems, NVRs offer a wide range of advanced features that enhance the overall surveillance experience.

How Does an NVR Work?

At its core, an NVR receives video feeds from one or more IP (Internet Protocol) cameras. IP cameras are capable of sending video data over a network, making them highly flexible and easy to install. The NVR then processes and compresses the video footage to optimize storage space and improve playback performance (a toy sketch of this receive-and-store loop appears at the end of this post).

One of the key advantages of an NVR is its ability to handle multiple cameras simultaneously. This means that you can connect multiple cameras to a single NVR, allowing for comprehensive coverage of your property. Additionally, NVRs can often support higher resolution cameras, such as 4K Ultra HD, ensuring that you capture every detail with exceptional clarity.

Once the NVR has processed and compressed the video footage, it is stored on internal hard drives within the device. These hard drives can vary in capacity, allowing you to choose the storage size that best suits your needs. Some NVRs also offer the option to expand storage capacity through additional hard drives or network-attached storage devices.

To access the recorded footage, users can connect to the NVR through a web-based interface. This allows for remote viewing and playback of the video files from anywhere with an internet connection. Some NVRs also support mobile apps, enabling users to monitor their cameras and view live or recorded footage directly from their smartphones or tablets.

The Benefits of Using an NVR

Now that we have a basic understanding of how an NVR works, let's explore some of the key benefits it provides:

1. Enhanced Video Quality

With an NVR, you can take full advantage of the high-resolution capabilities of IP cameras. This means that you can capture clear, detailed video footage, ensuring that no important detail goes unnoticed. Whether you want to identify a suspicious individual or review an incident, the enhanced video quality provided by an NVR is invaluable.

2. Greater Storage Capacity

Unlike analog recording systems that rely on physical tapes or DVDs, NVRs utilize internal hard drives for storing video footage. This allows for significantly larger storage capacity, which means you can store weeks or even months' worth of video data without worrying about running out of space. With larger storage capacity, you can maintain a comprehensive video archive for future reference if needed.
3. Easy Retrieval and Playback

Navigating through hours of video footage can be tedious, especially when you are trying to find a specific event. NVRs simplify this process by offering advanced search and playback features. Users can easily search for specific dates, times, or events, and quickly retrieve the desired footage. This saves time and enhances efficiency when reviewing footage for investigations or evidence gathering.

4. Remote Monitoring and Accessibility

One of the greatest advantages of an NVR is the ability to remotely access your cameras and recorded footage. Whether you are traveling, at work, or simply away from your property, you can conveniently monitor your cameras in real-time from any device with an internet connection. This provides peace of mind and allows you to stay connected with what matters most to you, no matter where you are.

5. Integration with Other Security Systems

NVRs are not standalone devices. They can be seamlessly integrated with other security systems, such as alarms, access control systems, and motion sensors. This integration enables a comprehensive and unified security solution that can be easily managed from a single interface. With the ability to control and monitor multiple security components from one place, you can streamline your security operations and respond effectively to any potential threats.

Choosing the Right NVR for Your Needs

When it comes to selecting an NVR for your security camera setup, there are several factors to consider. Here are a few tips to help you make an informed decision:
- Number of Cameras: Determine the number of cameras you need to connect to the NVR. Make sure the NVR you choose can support the desired number of cameras simultaneously.
- Storage Capacity: Assess your storage requirements based on the number of cameras, desired video quality, and recording duration. Consider factors such as motion-activated recording and scheduled recording to optimize storage space. Each NVR we sell comes with a pre-installed hard drive for your convenience.
- Remote Access and Mobile Apps: If remote monitoring is important to you, ensure that the NVR supports remote access through a web-based interface or mobile apps. Check for compatibility with your preferred devices and operating systems.
- Integration and Expandability: If you plan to integrate your NVR with other security systems, verify compatibility and expandability options. Ensure that the NVR supports the necessary protocols and interfaces for seamless integration.

Wrapping Up NVRs

In conclusion, a Security Network Video Recorder (NVR) is an essential component of any security camera setup. It provides centralized storage, efficient management, and easy access to video footage captured by IP cameras. NVRs offer enhanced video quality, greater storage capacity, easy retrieval and playback, remote monitoring capabilities, and the ability to integrate with other security systems. When choosing an NVR, consider factors such as the number of cameras, storage capacity, remote access options, and integration capabilities. By selecting the right NVR for your specific needs, you can ensure that your surveillance system is reliable, efficient, and effective in keeping your home or business safe and secure.

So, whether you are implementing a security system for your new home or upgrading the surveillance setup for your business, remember to include a Network Video Recorder – your trusted companion in capturing and managing valuable video footage.
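To make the receive-and-store loop described in "How Does an NVR Work?" concrete, here is a toy sketch in Python with OpenCV: pull frames from an IP camera's RTSP stream and write them to disk in fixed-length segments. The stream URL, credentials, frame rate, and segment length are placeholders, and a real NVR does far more (hardware decoding, motion detection, retention policies, and a playback interface).

```python
import time
import cv2

# Placeholder RTSP URL for a generic IP camera.
STREAM = "rtsp://user:password@192.0.2.10:554/stream1"
SEGMENT_SECONDS = 60  # start a new file every minute

cap = cv2.VideoCapture(STREAM)
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = None
segment_started = 0.0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped; a real recorder would reconnect here
    now = time.time()
    if writer is None or now - segment_started > SEGMENT_SECONDS:
        if writer is not None:
            writer.release()
        height, width = frame.shape[:2]
        # 15 fps is an assumed rate; match it to the camera's actual output.
        writer = cv2.VideoWriter(f"segment_{int(now)}.mp4", fourcc, 15.0, (width, height))
        segment_started = now
    writer.write(frame)

cap.release()
if writer is not None:
    writer.release()
```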
Guillen D., Ginebreda A., Farre M. (CSIC - Institute of Environmental Assessment And Water Research), Darbra R.M. (Polytechnic University of Catalonia), and 3 more authors. Science of the Total Environment, 2012.

The extensive and intensive use of chemicals in our developed, highly technological society includes more than 100,000 chemical substances. Significant scientific evidence has led to the recognition that their improper use and release may result in undesirable and harmful side-effects on both human and ecosystem health. To cope with them, appropriate risk assessment processes and related prioritization schemes have been developed in order to provide the necessary scientific support for regulatory procedures. In the present paper, two of the elements that constitute the core of risk assessment, namely occurrence and hazard effects, have been discussed. Recent advances in analytical chemistry (sample pre-treatment and instrumental equipment, etc.) have allowed for more comprehensive monitoring of environmental pollution, reaching limits of detection down to the sub-ng/L level. As an alternative to analytical measurements, occurrence models can provide risk managers with a very interesting approach for estimating environmental concentrations from real or hypothetical scenarios. The most representative prioritization schemes used for issuing lists of concerning chemicals have also been examined and put in the context of existing environmental policies for protection strategies and regulations. Finally, new challenges in the field of risk assessment have been outlined, including those posed by new materials (i.e., nanomaterials), transformation products, multi-chemical exposure, or extension of the risk assessment process to the whole ecosystem. © 2012 Elsevier B.V.

Until now, 3D printing has been limited to various types of solids; however, a new study has shown how to print highly complex hydraulic systems from both solids and liquids, which makes it easier to build labs on a chip for medical and pharmaceutical uses, and liquid channels for chemical testing and analysis. In what could be a significant move towards the rapid fabrication of functional machines, such robots also have potential applications in areas such as facilitating disaster relief in dangerous situations.

Scientists from the Computer Science and Artificial Intelligence Laboratory at MIT automatically produced 3D-printed dynamic robot bodies and parts that needed no previous assembly, using a commercially available multi-material 3D inkjet printer and a single-step process. Using a 3D printer to produce robots is a viable alternative to doing so by hand, which requires huge effort, or through automation, which has not yet reached the necessary level of sophistication. This "printable hydraulics" approach, which provides a design template that can be tailored for robots of different sizes, shapes and functions, was used to produce a small six-legged robot with a dozen hydraulic pumps embedded within it, requiring only the minimal addition of electronics and a battery before being operational. As team leader Daniela Rus points out, "3D printing offers a way forward, allowing us to automatically produce complex, functional, hydraulically powered robots that can be put to immediate use".
Such printable robots could also be produced quickly and cheaply, and have fewer electronic components than standard robots. A paper on the research was recently accepted for the 2016 IEEE International Conference on Robotics and Automation (ICRA). In the technique, the printer deposits individual droplets of material only 20-30 microns in diameter, layer by layer from the bottom up, with different materials deposited in different parts of each layer. A high-intensity UV light then solidifies the materials but leaves the liquids untouched. The printer can use many types of material, although each layer is made up of a photopolymer that is solid and a non-curing material that remains liquid. The team showcased the technique by 3D printing linear bellows actuators, gear pumps, soft grippers and the hexapod robot. The hexapod weighed about 1.5 pounds, was under six inches long, and moved using a single DC motor turning a crankshaft that pumps fluid to the robot's legs. It took 22 hours to print (not long considering its complexity), but the team hopes this can be made faster by improving the engineering and resolution of the printers.

News Article | February 11, 2016

A team of researchers led by Prof. Davide Scaramuzza has developed a way to train drones to follow forest trails in an effort to assist search and rescue missions for lost hikers. According to the research, Prof. Scaramuzza's team developed a machine learning method based on Deep Neural Networks (DNNs) that enables an unsupervised drone to determine the direction of a path using an on-board camera. The system was created by first setting up a hiker with three cameras that together cover about 180 degrees of visual information: one positioned straight ahead, one placed 30 degrees to the left and the other 30 degrees to the right, so that there is a slight overlap in the captured video. The hiker was instructed to always look ahead in the direction of the path, since the front camera provides the information for the trail. The raw data (PDF) used was eight hours' worth of footage covering approximately 7 kilometers of hiking trail at altitudes between 300 and 1,200 meters. The footage was taken at different times of day and under different weather conditions. The test results were surprising: the autonomous quadcopter was able to navigate a completely new trail and stay on course as well as, and sometimes even better than, humans. The same path and test were run with two humans against the drone to determine how effective the DNN-based machine learning was; on one test, the quadcopter was successful 85.2 percent of the time, compared with the two people, who were accurate 86.5 and 82 percent of the time. A second test under different conditions resulted in the quadcopter being accurate 95 percent of the time, while the two people were 91 and 88 percent accurate. "Now that our drones have learned to recognize and follow forest trails, we must teach them to recognize humans," Prof. Scaramuzza said. A drone that can recognize proper trails and humans will certainly be of great assistance to rescue operations, more so if it can also detect vital signs like the Lynx 6-A. The research, titled "A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots," appeared in the IEEE Robotics and Automation Letters (RA-L) and will be presented during the IEEE International Conference on Robotics and Automation (ICRA'16) in May.
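The Zurich team frames trail following as a three-way image classification problem: does the view ahead say turn left, go straight, or turn right? As a rough, hypothetical sketch of that idea (not the authors' actual network, and with the architecture, input size and training details invented for illustration), a small classifier could look like this in PyTorch:

```python
# Hypothetical sketch of a trail-direction classifier in the spirit of the
# Zurich work: frames labelled "turn left", "go straight" or "turn right"
# according to which hiker-mounted camera captured them.  All sizes and
# hyperparameters here are illustrative, not the paper's.
import torch
import torch.nn as nn

class TrailDirectionNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)               # (N, 64, 1, 1)
        return self.classifier(x.flatten(1))

# One training step on a dummy batch of 8 RGB frames (101x101 pixels here).
model = TrailDirectionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
frames = torch.randn(8, 3, 101, 101)
labels = torch.randint(0, 3, (8,))         # 0 = left, 1 = straight, 2 = right
loss = loss_fn(model(frames), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```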
In a pair of projects announced this week, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) demonstrated software that allows drones to stop on a dime to make hairpin movements over, under, and around some 26 distinct obstacles in a simulated "forest." One team's video shows a small quadrotor doing donuts and figure-eights through an obstacle course of strings and PVC pipes. Weighing just over an ounce and measuring three and a half inches from rotor to rotor, the drone can fly through the 10-square-foot space at speeds upwards of 1 meter per second. The team's algorithms, which are available online and were previously used to plan footsteps for CSAIL's Atlas robot at last year's DARPA Robotics Challenge, segment space into "obstacle-free regions" and then link them together to find a single collision-free route. "Rather than plan paths based on the number of obstacles in the environment, it's much more manageable to look at the inverse: the segments of space that are 'free' for the drone to travel through," says recent graduate Benoit Landry '14 MNG '15, who was first author on a related paper just accepted to the IEEE International Conference on Robotics and Automation (ICRA). "Using free-space segments is a more 'glass-half-full' approach that works far better for drones in small, cluttered spaces." In a second CSAIL project, PhD student Anirudha Majumdar showed off a fixed-wing plane that is guaranteed to avoid obstacles without any advance knowledge of the space, even in the face of wind gusts and other dynamics. His approach was to pre-program a library of dozens of distinct "funnels" that represent the worst-case behavior of the system, calculated via a rigorous verification algorithm. "As the drone flies, it continuously searches through the library to stitch together a series of paths that are computationally guaranteed to avoid obstacles," says Majumdar, who was lead author on a related technical report. "Many of the individual funnels will not be collision-free, but with a large-enough library you can be certain that your route will be clear." Both papers were co-authored by MIT professor Russ Tedrake; the ICRA paper, which will be presented in May in Sweden, was also co-written by PhD students Robin Deits and Peter R. Florence. A bird might make it seem simple, but flight is a highly complicated endeavor. A flying object can change position in six distinct directions: forward/backward ("surge"), up/down ("heave"), left/right ("sway"), and by rotating front-to-back ("pitch"), side-to-side ("roll"), and horizontally ("yaw"). "At every moment in time there are 12 distinct numbers needed to describe where the system is and how quickly it is moving, on top of simultaneously tracking other objects in the space that could get in your way," says Majumdar. "Most techniques typically can't handle this sort of complexity in real time." One common motion-planning approach is to sample the whole space through algorithms like the "rapidly-exploring random tree." Although often effective, sampling-based approaches are generally less efficient and have trouble navigating small gaps between obstacles. Landry's team opted to use Deits' new free-space-based technique, which he calls the "Iterative Regional Inflation by Semidefinite programming" algorithm (IRIS). They then coupled IRIS with a "mixed-integer semidefinite program" (MISDP) that assigns specific flight movements to each free-space region and then executes the full plan.
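IRIS itself grows convex obstacle-free regions using semidefinite programming; as a much-simplified stand-in for the "glass-half-full" idea of planning over free space rather than obstacles, the sketch below hard-codes axis-aligned free boxes, links the ones that overlap, and searches that graph for a route. The region coordinates are made up for illustration.

```python
# Toy illustration of planning over obstacle-free regions instead of obstacles.
# Real IRIS regions are general convex polytopes found by optimization; here the
# "regions" are hard-coded axis-aligned boxes (xmin, xmax, ymin, ymax).
from collections import deque

regions = {
    "A": (0.0, 2.0, 0.0, 1.0),
    "B": (1.5, 4.0, 0.0, 1.5),   # overlaps A
    "C": (3.5, 6.0, 1.0, 3.0),   # overlaps B
}

def overlaps(r1, r2):
    ax0, ax1, ay0, ay1 = r1
    bx0, bx1, by0, by1 = r2
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def contains(r, p):
    x0, x1, y0, y1 = r
    x, y = p
    return x0 <= x <= x1 and y0 <= y <= y1

def region_route(start, goal):
    """Breadth-first search over the graph of overlapping free regions."""
    src = next(n for n, r in regions.items() if contains(r, start))
    dst = next(n for n, r in regions.items() if contains(r, goal))
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for n, r in regions.items():
            if n not in seen and overlaps(regions[path[-1]], r):
                seen.add(n)
                queue.append(path + [n])
    return None

print(region_route((0.5, 0.5), (5.0, 2.0)))   # ['A', 'B', 'C']
```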
To sense its surroundings, the drone used motion-capture optical sensors and an on-board inertial measurement unit (IMU) that help estimate the precise positioning of obstacles. "I'm most impressed by the team's ingenious technique of combining on- and off-board sensors to determine the drone's location," says Jingjin Yu, an assistant professor of computer science at Rutgers University. "This is key to the system's ability to create unique routes for each set of obstacles." In its current form, MISDP has been optimized such that it can't do real-time planning; it takes an average of 10 minutes to create a route for the obstacle course. But Landry says that making certain sacrifices would let them generate plans much more quickly. "For example, you could define 'free-space regions' more broadly as links between areas where two or more free-space regions overlap," says Landry. "That would let you solve for a general motion-plan through those links, and then fill in the details with specific paths inside of the chosen regions. Currently we solve both problems at the same time to lower energy consumption, but if we wanted to run plans faster that would be a good option." Majumdar's software, meanwhile, generates more conservative plans, but can do so in real-time. He first developed a library of 40 to 50 trajectories that are each given an outer bound that the drone is guaranteed to remain within. These bounds can be visualized as "funnels" that the planning algorithm chooses between to stitch together a sequence of steps that allow the drone to plan its flying on the fly. A flexible approach like this comes with a high level of guarantees that the software will work, even in the face of uncertainties with both the surroundings and the hardware itself. The algorithm can easily be extended to drones of different sizes and payloads, as well as ground vehicles and walking robots. As for the environment, imagine the drone choosing between making a forceful roll maneuver that will avoid a tree by a large margin, versus flying straight and avoiding a tree by a small amount. "A traditional approach might prefer the first since avoiding obstacles by a significant amount seems 'safer,'" Majumdar says. "But a move like that actually may be riskier because it's more susceptible to wind gusts. Our method makes these decisions in real-time, which is critical if we want drones to move out of the labs and operate in real-world scenarios." CSAIL researchers have been working on this problem for many years. Professor Nick Roy has been honing algorithms for drones to develop maps and avoid objects in real-time; in November a team led by PhD student Andrew Barry published a video demonstrating algorithms that allow a drone to dart between trees at speeds of 30 miles per hour. While these two drones cannot travel quite as fast as Barry's, their maneuvers are generally more complex, meaning that they can navigate in smaller, denser environments. "Enabling dynamic flight of small, off-the-shelf quadcopters is a marvelous achievement, and one that has many potential applications," Yu says. "With additional development, I can imagine these machines being used as probes in hard-to-reach places, from exploring caves to doing search-and-rescue in collapsed buildings." Landry, who now works for 3D Robotics in California, is hopeful that other academics will build on and refine the researchers' work, which is all open-source and available on github. 
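As a deliberately simplified illustration of the funnel-library idea described above (the real verification uses rigorous worst-case certificates; here each "funnel" is just a straight-line motion with a fixed lateral bound, and the obstacle coordinates are invented):

```python
# Toy "funnel library": each maneuver moves the vehicle by a fixed offset and
# guarantees the true state stays within `bound` metres of the straight line
# between start and end.  A maneuver is usable only if that swept corridor
# clears every obstacle (circles given as (x, y, radius)).
import math

FUNNELS = {                      # name: (dx, dy, worst-case lateral bound)
    "straight": (1.0, 0.0, 0.10),
    "veer_left": (0.9, 0.4, 0.25),
    "veer_right": (0.9, -0.4, 0.25),
}
OBSTACLES = [(2.0, 0.05, 0.15), (3.0, 0.6, 0.2)]   # hypothetical trees

def point_segment_dist(p, a, b):
    px, py = p; ax, ay = a; bx, by = b
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def funnel_is_clear(start, funnel):
    dx, dy, bound = funnel
    end = (start[0] + dx, start[1] + dy)
    return all(point_segment_dist((ox, oy), start, end) > r + bound
               for ox, oy, r in OBSTACLES)

def plan(start, steps=4):
    """Greedily stitch together collision-free funnels, preferring 'straight'."""
    pos, sequence = start, []
    for _ in range(steps):
        for name in ("straight", "veer_left", "veer_right"):
            if funnel_is_clear(pos, FUNNELS[name]):
                dx, dy, _ = FUNNELS[name]
                pos = (pos[0] + dx, pos[1] + dy)
                sequence.append(name)
                break
        else:
            return sequence, pos          # no safe funnel: stop short
    return sequence, pos

print(plan((0.0, 0.0)))   # dodges the first tree, then continues straight
```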
"A big challenge for industry is determining which technologies are actually mature enough to use in real products," Landry says. "The best way to do that is to conduct experiments that focus on all of the corner cases and can demonstrate that algorithms like these will actually work 99.999 percent of the time." More information: Aggressive Quadrotor Flight through Cluttered Environments Using Mixed Integer Programming. groups.csail.mit.edu/robotics-center/public_papers/Landry15b.pdf ICRA expects wind energy capacity addition during the current fiscal year to grow 20% over the last year to about 2800 MW and will be driven both by the IPP and non-IPP segments. In the rating agency’s view, the demand drivers for the wind energy sector remain favourable in the long run. This is mainly aided by strong policy support in place at the Centre and in key states which have wind potential, favourable regulatory framework in the form of renewable purchase obligation (RPO) regulations, as well as the cost competitiveness of wind-based energy vis-à-vis conventional energy sources. The National Institute of Wind Energy (NIWE), Chennai, India has launched two online maps, one each for wind and solar radiation. The Wind Energy Resources Map of India has been launched at 100 meter above the ground, while the solar radiation map has been set up at ground level on the online Geographic Information System platform.
You have to hand it to NASA's headline writer on this video: "Challenges of Getting to Mars: Curiosity's Seven Minutes of Terror" is certainly clickable just based on that description. In this video that has become popular on the Internet over the weekend, NASA scientists discuss "the Challenges of Curiosity's final minutes to landing on the surface of Mars." Sure, the "DUN DUN DUN DUNNNNN" music is a bit much, but you have to respect the amount of scientific know-how required to make this a reality. We'll all find out on Aug. 5, when the Curiosity is scheduled to land on Mars. Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+. Now watch: LEGO Lord of the Rings game trailer hits the Web Japanese develop real-time facial movement in avatars via webcam How The Avengers Should Have Ended Did this 1985 film coin the phrase 'information superhighway' and predict Siri?
Sometimes referred to as FinFET because the conductive channel is wrapped by a thin silicon "fin," the parts increase the surface area for electron flow without impacting size or power efficiency. Check out this Intel video for a corny yet succinct layman's explanation of how Tri-Gate works compared with planar gates.

[Image caption: A comparable image from Intel's previous-generation 32nm process shows the traditional 2-D planar channel.]

Also significant, Ivy Bridge is the first Intel processor series with direct support for DirectX 11, the latest version of Microsoft's graphics and multimedia APIs. On this score, Intel archrival AMD is no longer the only game in town. And as Intel promised, Ivy Bridge processors use the same sockets as Sandy Bridge parts. When it announced last year its ability to mass produce the parts using a 22nm process, Intel estimated performance increases of as much as 37 percent compared with 32nm planar-transistor devices, and power consumption at about half or less. Such parts would be highly desired for small handheld units such as smartphones and tablets, medical devices, media players, and portable gaming systems. In fact, anything that can switch between high performance and low power consumption would benefit. These days, that includes just about everything.
Jha D.K., Earth System Science Organization (ESSO) National Institute of Ocean Technology (NIOT) | Jha D.K., Andaman and Nicobar Center for Ocean Science and Technology | Devi M.P., Bharathidasan University | Vidyalakshmi R., Bharathidasan University | and 3 more authors. Marine Pollution Bulletin | Year: 2015

Seawater samples collected at 54 stations during 2011-2012 from Chidiyatappu, Port Blair, Rangat and Aerial Bays of the Andaman Sea were investigated in the present study. The datasets obtained were converted into simple maps using a coastal water quality index (CWQI) and a Geographical Information System (GIS) based overlay mapping technique to demarcate healthy and polluted areas. Analysis of multiple parameters revealed poor water quality in Port Blair and Rangat Bays, with anthropogenic activities the likely cause, whereas good water quality was observed at Chidiyatappu Bay. Higher CWQI scores were recorded in the open sea, and the limited exploitation of coastal resources owing to minimal anthropogenic activity was reflected in the good water quality index at Chidiyatappu Bay. This study is an attempt to integrate CWQI and GIS-based mapping techniques to derive a reliable, simple and useful output for water quality monitoring in the coastal environment. © 2015 Elsevier Ltd.
The latest video news report from DigInfo News discusses a disaster-response robot that will help inspect the basements of the damaged Fukushima nuclear reactor buildings, which apparently have some small leaks in the underground areas. According to this video, the Sakura robot needs to climb down stairs, use a camera to spot the leaks, and maybe listen for leaking water (in case the camera can't spot them), as well as carry equipment down there to fix the problems. Oh, yeah, and make sure that the robot itself is protected from radiation that is sure to be down there. More photos and discussion of the robot's mission are here. Be brave, little robot! Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+. Watch some more cool videos: Science Monday #1: Why it's dark at night BBC gives Doctor Who fans an Amy/Rory postscript The best remote-control car chase ever Science Monday: Origins of Quantum Mechanics in under 5 minutes Motion-copy robot can mimic painting brush strokes
Kaspersky Lab announces the publication of the analytical article ‘The Dangers of Social Networking’, by Georg Wicherski, a malware analyst with the Company. The article analyzes a wide range of IT threats, from the less dangerous types such as run-of-the-mill spam, to the more technically sophisticated drive-by infections. If you click on untrusted links or use easily-guessable passwords to protect your personal data, then you are not only endangering yourself, but also the people around you, most notably the friends that you communicate with on social networking sites. Having gained access to your account, an attacker can impersonate you and send your friends messages that appear to originate from you. In order to save your valuable data, your money and your network of trust, you should not only follow some basic rules yourself but also raise your friends’ awareness too! The full version of the article ‘The Dangers of Social Networking’ can be found at Securelist.com/en. A summary of the article can be found here. This material can be reproduced provided that the author, the Company name and the original source are cited. Reproduction of this material in rewritten form requires the express consent of the Kaspersky Lab PR department.
Security Spotlight: Blocking or Filtering Spam

Other than a surprisingly popular potted meat product from the Hormel company, spam also refers to unwanted or unsolicited e-mail. Although the origins of this term in the computing world are somewhat mysterious, most authorities agree that the famous Monty Python skit featuring ceaseless repetition of the term helped contribute to its use for e-mail as well. All this said, Brightmail (a leading anti-spam technology vendor and service provider) has somewhat gloomily predicted that by the end of this year, over half of all e-mail traveling over the Internet will be spam. Why mention spam in a security newsletter? Besides its unfortunate impact on Internet bandwidth and its tendency to clog inboxes everywhere, some spam also includes malicious attachments. Since the mid-1990s, in fact, e-mail has been the leading source of infection for malicious software of all kinds. By definition, since infections are neither wanted nor welcomed, any infected e-mail message is spam. A recent spate of products to help filter or block spam has emerged. In this context, the term filter means "to remove potential spam from an inbox" and block means "to prevent potential spam from being forwarded." Blocking works best at e-mail servers, to prevent spam from traveling the Internet; filtering works best at e-mail clients (and servers), to keep spam from requiring a human's attention as he or she reads e-mail. Because no perfect blocking technology has yet been devised, some combination of blocking and filtering is usually required to eliminate as much spam as possible. On the server side, some companies offer e-mail screening services that are often advertised as anti-spam services. Other companies offer outright server-side anti-spam software, and still others offer client-side anti-spam software as well. Some combination of items from the first two of these three categories probably touches the majority of e-mail that arrives in users' inboxes nowadays, and an increasing number of users are employing anti-spam technology at the desktop as well (some of these implementations involve regular updates like those used with anti-virus software, and may be called anti-spam services as well). Table 1 shows a sampling of players in all three categories.

Table 1: Anti-spam software and services
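As a toy illustration of the client-side filtering idea, the sketch below scores messages against a keyword list and diverts high scorers to a junk folder; real products of the era combined header analysis, blocklists and statistical methods, and the terms, weights and threshold here are invented.

```python
# Minimal keyword-scoring filter: messages whose score crosses a threshold are
# moved to a junk folder rather than deleted, mirroring "filter" semantics.
SUSPECT_TERMS = {"free": 1, "winner": 2, "viagra": 3, "click here": 2, "urgent": 1}
THRESHOLD = 3

def spam_score(message: str) -> int:
    text = message.lower()
    return sum(weight for term, weight in SUSPECT_TERMS.items() if term in text)

def route(message: str) -> str:
    return "junk" if spam_score(message) >= THRESHOLD else "inbox"

print(route("URGENT: you are a WINNER, click here for your free prize"))  # junk
print(route("Minutes from Tuesday's budget meeting attached"))            # inbox
```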
You can use lists to organize different types of data and display the data in your apps. Users can select items in a list and your app can respond to the selection in any way you want. Your app might display information that's related to the selected item, or it might display a new screen. Lists in the Cascades framework use a model-view-controller design pattern. This design pattern separates the data for the list (the model) and the visual representation of the list (the view). By using this approach, you can provide data from different sources without changing the graphical appearance of the list. Similarly, you can change how each list item looks without changing where the data comes from. You create a list by using two components: a list view and a data model. - A list view determines how data is displayed in the list. You can specify visual properties for the list, such as width, height, and margins. You can also specify the appearance of each list item by using QML controls. For example, for a list of text options, the list items might be Label controls. For a list of images, the list items might be ImageView controls. You can also create custom QML components to represent your list items. - A data model provides the data for the list view to display. You can use data from various sources, such as a JSON data structure, a SQLite database, or an .xml file. When the list view needs to access the data, it queries the data model and receives the appropriate information to display. You can use predefined data models to handle your data, or you can create your own data model. In Cascades, list views are represented by the ListView class in C++ and the corresponding ListView component in QML. Data models are represented by the DataModel class and its subclasses. You can learn more about list views and data models by visiting the links below. These links focus on how to use ListView in QML. See Creating a list for information about creating a list in C++. To create your list in QML and then populate the data model for the list in C++, see GroupDataModel example using QML and C++. Last modified: 2015-05-07
Stanford University researchers "have created a tiny wireless chip, driven by magnetic currents, that's small enough to travel inside the human body. They hope it will someday be used for a wide range of biomedical applications, from delivering drugs to cleaning arteries." It's not as cool as miniaturizing a submarine filled with doctors (or Martin Short, in Innerspace), but it's still an advance in science that must have been inspired by science-fiction stories like Fantastic Voyage and the like. It also reminded me of the Body Wars ride at EPCOT Center, which used a motion-control machine to simulate a ride through the body. The ride was in the now-closed Wonders of Life Pavilion, and included performances by Tim Matheson and Elizabeth Shue. Here's a video tribute to the ride, for those who remember: Read more of Keith Shaw's ITworld.TV blog and follow the latest IT news at ITworld. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
Primer: Geospatial Analysis
By Kevin Fogarty | Posted 2004-08-04

Mapping data yields more than just good directions; it can yield customers. A look at the benefits of geospatial analysis.

What is it? A way to determine where your customers live or work by correlating their street addresses with their physical location. It's done by adding data from mapping software or the Global Positioning System to the customer information you already have, such as purchasing history, creditworthiness and income. By combining demographic and geographic information, marketers can, for example, draw a line around a particular region and ask the database for the names and income ranges of customers who live within that area. Existing databases can track street addresses and ZIP codes, but can't usually tell, say, whether East Main Street and West Main Street are next to each other or miles apart.

Where's the benefit? By knowing a customer's physical location, you can gain tremendous insight into that person's needs, says Fred Limp, director of the Center for Advanced Spatial Technologies (CAST) at the University of Arkansas. If, for example, a customer at 123 Main St. just bought a riding mower, a neighbor at 321 Elm might, too. A normal database query would tell you the two addresses are in the same ZIP code, but not that they are around the corner from each other in a development with unusually large lawns.

How would I use it? The most-cited example, according to Limp, is to help select retail locations by analyzing neighborhood demographics surrounding each potential spot. Without searchable location data, you'd have to rely on ZIP codes to identify the area to be examined. "But then, once you put in the data, you can also ask: How many customers make more than $100,000 and live within two miles of the store?" Limp says. Making those kinds of connections can also help a wholesaler identify where it's losing sales because there aren't enough distributors, or allow an insurance company to set rates according to the disaster risk for a particular house, not just a neighborhood, says Henry Morris, group vice president for applications and information access at IDC.

Where do I get this information? Most detailed electronic maps are created by federal, state or local governments to maintain roads, bridges and other infrastructure, but they're available free to the public. The problem is, you have to piece it together. For a price, service bureaus will do the work for you, using interoperability standards created by the Open GIS Consortium, which represents vendors of geospatial-analysis products.

What's the downside? Not all entities want to give up their information. Utilities, for example, build detailed maps that include customer locations, pipes and underground lines. They share some information to help keep backhoes from digging in the wrong places, but are reluctant to share details such as the locations of vulnerable central switching stations or other critical elements, according to Bob Samborski, executive director of the Geospatial Information & Technology Association. Issues like this limit the detail and effectiveness of geographic data. Various government agencies are negotiating with private-sector companies, Samborski says, but no wide-ranging agreement has been reached.
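As a rough sketch of Limp's "customers who make more than $100,000 and live within two miles of the store" query, the snippet below filters an in-memory customer list by income and great-circle distance; in practice this would run inside a spatially indexed database, and the coordinates and incomes shown are fabricated.

```python
# Find customers with income above $100,000 within a 2-mile radius of a
# candidate store site.  A real deployment would push this into a spatially
# indexed database; this is only the underlying arithmetic.
import math

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

customers = [  # (name, latitude, longitude, annual income) -- made-up records
    ("A. Jones", 40.7430, -73.9930, 125_000),
    ("B. Smith", 40.7610, -73.9840, 88_000),
    ("C. Diaz",  40.7290, -74.0100, 140_000),
]
site = (40.7410, -73.9900)          # hypothetical candidate store location

nearby_affluent = [
    name for name, lat, lon, income in customers
    if income > 100_000 and haversine_miles(lat, lon, site[0], site[1]) <= 2.0
]
print(nearby_affluent)   # ['A. Jones', 'C. Diaz']
```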
Webb S.L., Clavelen Wing Associates LLC | Webb S.L., Samuel Roberts Noble Foundation | Dzialak M.R., Clavelen Wing Associates LLC | Houchen D., Clavelen Wing Associates LLC | and 2 more authors. Western North American Naturalist | Year: 2013

Development for wind energy is increasing rapidly across the United States, particularly in Wyoming, despite a general lack of information on the potential interactions development could have with wildlife species. Knowledge of the space use and movement patterns of individuals can therefore help define spatial distributions and management unit boundaries for populations prior to development. Such knowledge can also be used as baseline data from which to assess any future impacts on animal populations. We investigated the spatial ecology of female mule deer (Odocoileus hemionus; n = 18) equipped with global positioning system collars from 23 February 2011 to 15 January 2012 in an area along the Wyoming-Colorado border that has been proposed for wind energy development. The objectives of this study were to collect predevelopment baseline estimates of annual and seasonal home-range and core area size and fidelity, movement between seasonal ranges, changes in the use of elevation, and movement patterns at 2 temporal resolutions (i.e., within-season diel patterns and year-round diurnal and nocturnal movements by week). Annual size of home ranges averaged 2495 ha (SE = 121), whereas size of core areas averaged 310 ha (SE = 30). Seasonal site fidelity was substantial (81.1%, SE = 5.7) between successive cool-season ranges. Migration distances between cool- and warm-season home ranges were minimal (spring migration = 1319 m; autumn migration = 1342 m). Deer exhibited crepuscular movement patterns (peaks near 06:00 and 18:00) during the warm season but showed a diurnal movement pattern during the cool season (peak from 06:00 to 15:00). Parturition influenced movement during the warm season; movement was much reduced from mid-June to mid-July. Deer in this population appear to be year-round residents that exhibit strong seasonal and annual fidelity to previously established ranges and modify movement patterns in relation to general changes in environmental conditions (e.g., snow). These findings can be used to define seasonally important ranges and formulate boundaries and sizes of game management units. Understanding fine-scale temporal movement allows the development of strategies that could minimize disturbance to deer while allowing for development or recreation. © 2013.
BY ROB ENDERLE Predicting social change is a crap shoot. There are so many variables to consider. And, as powerful as the Internet may be, the real world that exists outside the virtual one is so far beyond our control that its impact is virtually unpredictable. Nonetheless, there are some clear trends that will affect society greatly, and by focusing on them, we can glimpse where we are going—and hopefully control the technology rather than allowing it to control us. IT’S TIME FOR YOUR CLOSE-UP We are still at the early stages of digitizing, imaging, and monitoring the real world. But by 2030, video scrutiny will be far more pervasive than today, and more heavily populated areas will be under constant video surveillance. Being on camera virtually all the time, and being able to access images of most locations and activities, will not only change how we feel about personal security and privacy, but will also cause us to censor how we behave in public places. It isn’t yet clear if we will feel less or more secure, but the reaction to Google Earth in Europe reminds us that technology is a two-edged sword: Many people see the application as a tool that helps burglars case a location before striking. Others point out that live feeds could, in theory, make it easier to catch thieves and other criminals. In the future, more people will be able to monitor what you do, and there will be a more detailed and permanent record of it, creating profound privacy concerns. One can imagine creating a video diary of a specific time interval in one’s life or perusing a video database to see where a prospective employee, a suspected criminal, or a rebellious teenager goes and what they do during the day. A “VIRTUAL YOU” THAT COULD LIVE FOREVER The concept of creating a virtual representation of oneself has already begun at sites, like Lifenaut, that let users create a realistic, 3D avatar: a “virtual you” that you can teach to talk and behave like you, using online tools. In 20 years, this technology may make it possible to “be in two places (or more) at once.” For example, an avatar might handle routine e-mail, monitor news and social networking feeds, and even chat with people when you’re unavailable. And, of course, a virtual person could outlive the real one, perhaps offering some comfort, or affliction, to those still alive. Advancements in data mining are creating tools like MIT’s Persona, which can quickly compile how you are viewed and spoken about on the Web. Down the road, one can imagine a job interview that is conducted with your virtual self—not one that you create, but one that is based on the information, accurate or not, that is available about you. The Web could become a highly accurate lie detector or a totalitarian nightmare. It’s easy to imagine your avatar testifying against you based on what past behavior suggests you would likely do. YOU AIN’T SEEN NOTHIN’ YET In 20 years, the ability to separate what is real from what is imagined will be increasingly difficult, but the difference may become irrelevant as the real and virtual worlds blend—most likely to our benefit and our detriment. And that’s only the beginning of the impact of the Internet. The next hundred years? Now that’s where the really big changes will occur!
Not since the British burned the Library of Congress to the ground in the War of 1812 has there been a more devastating attack on the famous library. Only this time, the recent attack was of the digital variety and King George III had nothing to do with it. The attack was launched on July 17th by a hacking group calling themselves the Turk Hack Team. The group used a DDoS attack to shut down the Library of Congress website and hosted systems, including Congress.gov, the Copyright Office, Congressional Research Service, and other sites. What makes this attack so sobering is that it could have been prevented if the Library's IT systems were properly managed and updated. This revelation caused a shakeup of the Library’s leadership, along with a call from government officials for an overhaul of the Library’s outdated IT systems. Until these updates are completed, those who rely on the Library of Congress to gather crucial information may find themselves stuck with an inefficient system. Looking all the way back to 2002, the Library of Congress has a reputation for the chronic mismanagement of its IT systems, which includes the mishandling of contractors and the miscalculation of IT budgets. Much of the blame lies with the library's leadership, a head librarian of 28 years who showed patterns of resisting the latest IT solutions. The librarian’s anti-technology attitude was even seen on a personal level as they refused to use email. This mismanagement of the library's IT is no secret around Washington DC. In a 2015 report, the Government Accountability Office (GAO) criticized the library’s infrastructure and demanded that they hire permanent employees to oversee their IT systems, which comes with a budget of $120 million. To give you a window into the library's mismanagement, consider the fact that, in the library’s report filed to the GAO, they claimed to have had less than 6,500 computers in their possession, yet the GAO found the actual number to be closer to 18,000. In another telling example of the Library’s technology woes, it was found that another government department overseen by the Library of Congress, the Copyright Office, still has many of its important records card-catalogued. While the library’s paper-based card catalogue may be safe from foreign hackers, it’s certainly an inefficient way to run a major institution. Businesses that don’t prioritize in updating and maintaining their IT infrastructure can learn a lot from this major data breach. Hackers are first and foremost looking for organizations with outdated IT systems. Companies that fit this bill are considered easy targets, or “low-hanging fruit.” Alternatively, businesses that implement current IT solutions, update their systems, and make network security a priority will be passed over by hackers like yesterday’s jam. To get this kind of protection and oversight for your business, call Nerds That Care today at 631-648-0026.
The fiber optic communication network has become the cornerstone of today's world of information transfer. With the further development of the network and increasing market demand for communication bandwidth, the last ten kilometers, and even the last kilometer, between the network and the user are now carried over optical fiber, making FTTH an important direction for the development of fiber optic communication networks. FTTH mainly uses PON network technology, which requires a large number of low-cost optical splitters and other passive optical components. The optical splitter is an integral part of the FTTH network, and as FTTH is rolled out, market demand for it will only grow.

The traditional way to fabricate optical splitters is fused biconical taper (FBT) technology, which is mature and simple. Its disadvantages are poor uniformity across the split outputs and large device size, which reduce yield and raise the cost per channel. FBT-based splitter fabrication has therefore been unable to keep up with market demand.

From the perspective of optical device development, PLC (planar lightwave circuit) technology has become the mainstream technology for large-scale production of high-performance, low-cost optical splitters. In the PLC approach, an optical splitter chip is produced and then coupled and packaged with a fiber array to complete the splitter. Its features are small device size, low cost and good splitter uniformity, although the technical threshold is relatively high, especially for the large splitters suited to mass production. Among PLC approaches, glass-based PLC technology has great advantages in terms of equipment investment and production costs, making it a strong choice for the low-cost optical devices, such as splitters, that FTTH requires.

Internationally, PLC technology has been widely used for miniaturized, high-performance optical device fabrication, and for the optical splitter chip in particular. In China, however, the reality is that we have become a major PLC packaging country, but one limited to splitter and device packaging and the downstream industry chain, with no production line for the PLC chips themselves; the core chips are entirely dependent on imports. Because the core chip fabrication technique lies abroad, device costs are largely outside domestic control, and there is little technical support for further high-end integrated chip development. This severely hinders the development of PLC applications in our country.

PLC splitter chips are manufactured by PECVD (plasma enhanced chemical vapor deposition), FHD (flame hydrolysis deposition) or ion exchange. The first two use silica-on-silicon as the substrate material, while the latter uses glass. As with AWG (arrayed waveguide grating) chip production, silica optical waveguide splitter chips can be produced on either silica-on-silicon waveguides or glass waveguides. A number of domestic colleges and universities have been researching and developing glass waveguide PLC chips made by ion exchange, and technically appraised samples have reached the international advanced level of similar products.
With further pilot research and development, mass production of PLC splitter chips should be achievable. In fact, beyond splitter chips, glass-based PLC technology has a wide range of other potential applications, for example in the optical sensors required for detection tasks. Nowadays, many kinds of PLC splitters built around optical splitter chips are on the market for different network connections. The bare fiber PLC splitter has no connector on its bare fiber ends. The blockless PLC splitter has a compact stainless steel tube package that provides stronger fiber protection, with terminated ends. The ABS PLC splitter uses a plastic ABS box to protect the splitter. Other types, such as the fanout splitter, tray type splitter, rack-mount splitter, LGX splitter and mini plug-in splitter, are also widely applied in FTTH networks. PLC splitter chip technology has greatly improved the implementation of FTTH networks. FS.COM provides all of the PLC splitters listed above; if you are interested, please search our website for more detailed information.
“If you believe some of the reports, secure connections as we know are worthless”, he says, adding that, whilst this issue may seem quite complex for most users, the bottom line is that the security of encryption systems rely on two features: that the attacker does not know the key used to encrypt the message and the attacker does not know the nature of message being sent. “However, if the attacker can trick your browser into sending some known plain-text over the target Secure Sockets Layer (SSL) connection and they can also capture a copy of that message in transit, then the possibility arises of decoding other plain-text within the same message. While having a copy of a known message encrypted is not as good as having that key, it does give the attacker a good foothold making the cryptanalysis of the message much easier”, he explains in his latest security posting. “Now that the attacker now has the capability, with some effort, to decode parts of the of messages sent by the user to the secure server. It should be noted that the this attack only works on one direction at a time. Using this method it is possible to decode portions of other plain-text in the same message as our injected text”, he adds. April goes on to say that the Beast toolkit released by the Far Eastern researchers earlier this month uses this capability to extract session cookies that can be used to hijack the user session. And here's where it gets interesting, as the Trend Micro senior threat researcher says that security experts have known for years that TLS/SSL is potentially vulnerable to this kind of attack. “Simply put, the Beast toolkit did not reveal anything we don’t already know. What it did was to package this attack into an easy to use form that vastly reduces the resources and skills required to execute it”, he explained. April notes that there has been a lot of talk about this being a man-in-the-middle attack, but it can just as easily be executed with browser and local network access. Depending on network configurations, he argues that the sniffer could reside on the target host or an adjacent host. “There is a great deal of infrastructure flexibility possible here”, he says. So what can users do about this problem? April says that users should keep time spent on sensitive SSL sessions as short as possible, as the attacker needs time to decode the encrypted message. If the session cookie is invalid before the attacker has finished, he asserts, this attack fails. “When leaving an SSL protected site, be sure to actually log out, not just move to a new site. In many cases, actively logging out will invalidate any cookie/session data that the attacker may have successfully decoded”, he says. April concludes that standard security best practices still work - for this attack to be successful, he says that the attacker must have access to either your network or your computer. At the very least, he adds, up-to-date security software will make life harder for an attacker.
An international nuclear nonproliferation organization said on Tuesday it has identified what appear to be radioactive materials released more than two months ago by North Korea’s third nuclear test. Sensors in Japan and Russia detected the xenon 131m and xenon 133 earlier this month, according to the Preparatory Commission for the Comprehensive Test Ban Treaty Organization. “The ratio of the detected xenon isotopes is consistent with a nuclear fission event occurring more than 50 days before the detection (nuclear fission can occur in both nuclear explosions and nuclear energy production),” the Vienna, Austria-based body said in a press release. “This coincides very well with announced nuclear test by the D.P.R.K. that occurred on 12 February 2013, 55 days before the measurement.” There has been little question that North Korea detonated a nuclear device within an underground chamber at its Punggye-ri installation. State media declared the event, which was instantaneously detected by 94 CTBT seismic sensor sites and two infrasound stations. Detection of radioactive material was seen as offering definitive proof of the blast. Issue experts had theorized that the absence of such a find suggested the North intentionally sealed the test chamber to prevent material escapes. The North has now been determined to be a probable origin site for the radioactive noble gases via atmospheric modeling of how weather patterns would move the material, the agency noted. It said, though, that the conclusion provided no assistance in determining whether Pyongyang detonated a plutonium-based device as in its 2006 and 2009 tests, or if instead it used highly enriched uranium for the first time. “To be able to distinguish between uranium and plutonium, it helps if a detection is made early (before the decay of isotopes) and the amount of registered radioactivity is large,” according to a CTBTO fact sheet. "At this stage it is very unlikely that remote sensing is going to provide any clues as to what material the test involved," said Daryl Kimball, executive director of the Washington-based Arms Control Association. Definitive word would probably have to come from North Korea, added Jeffrey Lewis, director of the East Asia Nonproliferation Program at the James Martin Center for Nonproliferation Studies. "Either they tell us, or ... a little bird overhears them talking about it," he stated by e-mail. The preparatory commission is charged with fielding and operating hundreds of detection facilities that would use four different sensor technologies to catch breaches of the Comprehensive Test Ban Treaty, which has yet to enter into force. North Korea has not joined the pact and continues to push ahead with its nuclear arms program in the face of overwhelming global opposition. It is the only nation in the last 15 years to conduct explosive atomic trials. "The latest detection shows again that the CTBT verification regime is very sophisticated and stands ready to provide confidence to states that no nuclear explosion will escape detection," CTBTO spokeswoman Annika Thunborg stated by e-mail. Thunborg in March said CTBTO officials did not expect to find any radioactive remnants one month after the test. “Detection of radioactive noble gas more than seven weeks after the event is indeed unusual, we did not expect this and it did not happen” in North Korea’s previous nuclear test more than three years ago, according to the Tuesday CTBTO statement. 
Primary detection of xenon occurred on April 8 and 9 in Takasaki, Japan, 620 miles from Punggye-ri, followed by a lower-level identification from April 12 to 14 at Ussuriysk, Russia. Further analysis was necessary before the findings could be announced, the organization said. Findings suggest an “instantaneous” emission of 1 to 10 percent of the noble gases that would have been left from February, the agency said. It declined to speculate on the cause of the release. "Sometimes the geology of the test site means that the radiological gases that are produced to not escape to the surface for some time," Kimball said in a telephone interview. It remains possible that the materials did not originate in North Korea, the treaty organization acknowledged. However, it ruled out Japan’s earthquake-crippled Fukushima Daiichi nuclear plant as the source and played down the potential for spoofs. While it would be feasible for the North to release the radioactive gases without setting off a nuclear device, producing the 4.9-magnitude earthquake that occurred on Feb. 12 with standard explosives would be “technically very challenging” and hard to pull off without being caught, the agency said.
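The dating argument rests on the two isotopes decaying at different rates: xenon-133's half-life is roughly 5.25 days against roughly 11.9 days for xenon-131m, so the measured ratio between them shrinks in a predictable way as the debris ages. A back-of-the-envelope sketch (the initial ratio is an arbitrary placeholder, not an actual fission yield):

```python
# How a Xe-133 / Xe-131m activity ratio decays with time since a fission event.
# Half-lives are public nuclear data; the starting ratio is a made-up example.
import math

HALF_LIFE_DAYS = {"Xe-133": 5.25, "Xe-131m": 11.9}

def remaining_fraction(isotope: str, days: float) -> float:
    return math.exp(-math.log(2) * days / HALF_LIFE_DAYS[isotope])

initial_ratio = 10.0   # hypothetical Xe-133 : Xe-131m ratio at detonation
for days in (0, 10, 30, 55):
    ratio = initial_ratio * remaining_fraction("Xe-133", days) / remaining_fraction("Xe-131m", days)
    print(f"day {days:2d}: Xe-133/Xe-131m ratio is about {ratio:.2f}")
```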
Time and again we've seen that the username-password model of security isn't very secure. In its latest exposé, Ars Technica showed how 90 percent of passwords are quickly made mincemeat. But until we have electronic tattoos and password pills, we're stuck with the same old password problem. How do you make your password more secure? The easiest answer is to just make it longer. While 6-character passwords containing mixed characters and numbers might once have been considered secure, crackers these days can guess them in minutes using brute force, thanks to improved technology. According to the Ars article: Gosney's first stage cracked 10,233 hashes, or 62 percent of the leaked list, in just 16 minutes. It started with a brute-force crack for all passwords containing one to six characters, meaning his computer tried every possible combination starting with "a" and ending with "//////." Because guesses have a maximum length of six and are drawn from 95 characters (26 lower-case letters, 26 upper-case letters, 10 digits, and 33 symbols), there are a manageable number of total guesses. This is calculated by adding the sum 95^6 + 95^5 + 95^4 + 95^3 + 95^2 + 95. It took him just two minutes and 32 seconds to complete the round, and it yielded the first 1,316 plains of the exercise. Longer passwords of seven to eight characters take more time, but not that much more time. Mere seconds, in fact. The good news is that, with current technology, passwords 11 or more characters long are exponentially harder to crack by brute force. So Ars' recommendation to readers is to "make sure their passwords are a minimum of 11 characters, contain upper- and lower-case letters, numbers, and symbols, and aren't part of a pattern." To make your password even stronger: make it as long as possible (at least 11 characters), truly random, and avoid dictionary phrases too. Yes, that even includes "correct horse battery staple." A password manager, such as LastPass, 1Password, KeePass, or Apple's new built-into-Safari password manager, can help generate a long and strong password for you and save it for future reference, so you don't have to keep doing that whole password reset dance and can keep hackers out of your important accounts. Read more of Melanie Pinola's Tech IT Out blog and follow the latest IT news at ITworld. Follow Melanie on Twitter at @melaniepinola. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
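To make the keyspace arithmetic concrete, the snippet below sums the guesses needed for each maximum length and converts them to a worst-case cracking time; the guess rate is an assumption in the same ballpark as the GPU rigs Ars describes, not a measured figure.

```python
# Brute-force keyspace for passwords drawn from 95 printable characters, and the
# worst-case time to exhaust it at an assumed 10 billion guesses per second.
GUESSES_PER_SECOND = 10_000_000_000   # assumption for illustration
ALPHABET = 95

def keyspace(max_length: int) -> int:
    # 95^1 + 95^2 + ... + 95^max_length
    return sum(ALPHABET ** n for n in range(1, max_length + 1))

for length in (6, 8, 11):
    seconds = keyspace(length) / GUESSES_PER_SECOND
    print(f"up to {length:2d} chars: {keyspace(length):.3e} guesses, "
          f"~{seconds / 86_400:.3g} days to exhaust")
```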
As teachers increasingly focus on video streaming and other advanced technological methods in the classroom, the need for network bandwidth in schools is rising. This requirement closely mirrors what is going on in the enterprise, where video is used to engage workers and improve meeting quality, creating new bandwidth demands. Dealing with video in education According to a recent Education Week report, the bandwidth problem in education is escalating, despite efforts made to alleviate connectivity issues. For the past few years, the Federal Communications Commission has been working to implement better network infrastructure throughout the country, providing the core cabling infrastructure needed to enable schools and similar public sector facilities to access better resources from a telecom perspective. While these efforts are admirable, the news source said that studies indicate that more needs to be done. Last spring, the State Educational Technology Directors Association (SETDA) completed research that indicated that bandwidth increases for schools will not happen linearly, but exponentially. However, the Common Core State Standards initiative analyzed technology trends in schools around the country, finding that next-generation technological deployments in education that promote digital learning tools could push bandwidth needs beyond the exponential growth estimates made by the SETDA. What this means for businesses Many teachers are finding that video, especially advanced uses of the technology, can add immersion to the classroom and engage students. Similarly, executives are realizing that they can use enterprise video to improve meeting quality and engage the workforce. In businesses, employee-generated content also offers major potential. However, companies cannot necessarily rely on the FCC to make strategic investments in fiber backhaul to support private companies. Instead, CIOs have to consider ways to adapt their networks for the requirements created by video. This process of making the enterprise network work with video has two distinct phases. The first is working with telecom services providers to identify bandwidth capabilities, what upgrades are possible and analyzing how video impacts the network. When that is accomplished, turning to a solutions provider for video-specific delivery solutions, such as an enterprise content delivery network, can provide vital support for video, alleviating many bandwidth concerns. In many cases, such a solution provides more relief than implementing bandwidth upgrades. Video can easily use up so much data throughput that a simple capacity improvement does not get the job done.
<urn:uuid:b6de102d-6e40-49ec-8cdd-e88baebd87d8>
CC-MAIN-2017-04
http://www.kontiki.com/resources/education-sector-and-businesses-face-similar-issues-in-video-trends/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00256-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948174
451
2.59375
3
Lesson 2: Redundancy Matters
Verizon had installed redundant connections between its central offices so a cable failure on one route wouldn't disrupt communications. Although the redundancy did not allow Verizon to immediately restore service, technicians ran aboveground cables to an undamaged portion of a redundant line that connected 140 West to a central office on Canal Street, several blocks north of Ground Zero. Without the redundancy, Verizon would have taken weeks to dig up streets and connect 140 West to the Canal Street facility—assuming equipment was available to dig the trenches. After Sept. 11, Verizon put 18 optical communications rings into place in Lower Manhattan. With these rings, which use Synchronous Optical Network (SONET) technology, a failure between two points can be overcome simply by reversing the direction of traffic. Even with the SONET rings, Verizon's data network in Lower Manhattan is not fail-safe. A knockout of 140 West will still cause the network to fail. But now Verizon's network will operate if damage or disaster disables part of a ring. Muscle power was aided immeasurably by one feature Verizon built into its network before Sept. 11.
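The reason a ring survives a single cut, as described above, is that traffic always has a second direction of travel. The short Python sketch below models that idea in the abstract; the node names and the failed span are hypothetical, and it is not a description of Verizon's actual SONET provisioning:

    # Toy model of ring protection switching: if the usual (clockwise) path
    # crosses a failed span, send traffic the other way around the ring.
    RING = ["A", "B", "C", "D", "E", "F"]      # hypothetical offices on a ring
    FAILED = {frozenset(("B", "C"))}           # a single cut between B and C

    def clockwise(src, dst):
        i, j = RING.index(src), RING.index(dst)
        return RING[i:j + 1] if i <= j else RING[i:] + RING[:j + 1]

    def counterclockwise(src, dst):
        return list(reversed(clockwise(dst, src)))

    def healthy(path):
        return all(frozenset(pair) not in FAILED for pair in zip(path, path[1:]))

    def route(src, dst):
        cw = clockwise(src, dst)
        if healthy(cw):
            return cw
        ccw = counterclockwise(src, dst)
        return ccw if healthy(ccw) else None

    # The A-B-C-D path crosses the cut, so traffic reverses: A-F-E-D.
    print(route("A", "D"))

Real SONET protection switching makes the equivalent swap in hardware, typically within about 50 milliseconds, which is why a single cable cut on a ring is survivable.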
<urn:uuid:d68bf94b-830e-4bf3-98f2-7ffa7c431276>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Messaging-and-Collaboration/Verizon-Reconnecting/2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00558-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938657
240
3.015625
3
William E. Kennard was nominated by President Clinton in August 1997 to chair the Federal Communications Commission. Sworn in Nov. 7, 1997, his term expires June 30, 2001. Kennard, a Los Angeles native and Phi Beta Kappa Stanford grad, received his law degree from Yale in 1981, was a practicing attorney in a broad range of communications issues, and served 3 1/2 years as general counsel to the FCC before his confirmation. Just a year into his term and poised at the edge of the millennium, Kennard has not only already made his mark as the first African American to head the independent government agency, but is overseeing the most historic changes ever in the telecommunications cosmos, following the big bang of the 1996 Telecommunications Act, now being implemented by the FCC, with policies affecting interstate and international radio, television, wire, satellite and cable, and now, the Internet. Now, at the outset of the Information Revolution, communications and information industries represent the fastest growing sectors of the economy, and involved more than $800 billion in 1997. And on the state and local level, battles continue to rage in the wake of the 1996 Telecommunications Act, with those billions of dollars at stake. As various technologies converge, some have gone so far as to call for the abolition of the FCC. Kennard rejects that notion, citing the need for FCC regulation to assure equality of access as a fundamental right of all Americans. And in the race for bandwidth, Kennard offers his vision for what could lie ahead in the very near future, if the telecom giants are willing to abandon some of their gold and join the FCC to give the telecosm a brave new whirl. Q: Until 1996, the telecosm had been governed largely by laws half a century old and telephone industry rules dating back to 1887. The Telecommunications Act of 1996 is perhaps the most important piece of economic legislation of the 20th century. Describe this historic legislation and some of its ramifications from your vantage point in the middle of it all. A: When Congress passed the Telecommunications Act of 1996, we all thought we were setting the stage for competition in the local market for plain old telephone service over the old public switched telephone network. The principal debate about the telecom act was how to promote competition on the analog telephone network -- local and long distance. Cable, long-distance and local telephone companies all said that they were going to enter each other's businesses. Entry turned out to be harder and more costly than expected. But the rise of the Internet has changed business plans again. Companies can now compete to sell high-speed Internet access. At the same time, all communications products are becoming digital bits. Whether audio, video, voice or Internet data, they are all computer codes -- ones and zeros. This brings down the cost of competition. Internet and digital technology have the potential to renew the promise of the telecom act. Circuit switches are giving way to packet switches. Instead of keeping an entire circuit open and dedicated to a single conversation for the length of the phone call, packet switching breaks the spoken words into tiny data packets that are disassembled, then transmitted separately over the most efficient routes possible and then reassembled at the other end of the call in microseconds. The same technique can be used to handle other types of traffic, such as data, image, and even video.
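Kennard's description of packet switching (chop the conversation into packets, send them independently, reassemble at the far end) can be sketched in a few lines of Python. This is a toy illustration of the concept, not how any real voice-over-packet system is implemented:

    import random

    # Toy packet switching: number the pieces, let them arrive in any order
    # (as if they took different routes), then reassemble by sequence number.
    def packetize(message, size=4):
        return [(seq, message[i:i + size])
                for seq, i in enumerate(range(0, len(message), size))]

    def reassemble(packets):
        return "".join(payload for _, payload in sorted(packets))

    packets = packetize("packet switching carries voice as data")
    random.shuffle(packets)        # packets arrive out of order
    print(reassemble(packets))     # the original message comes back intact

The sequence numbers play the role of the reassembly step Kennard mentions; no circuit stays reserved while the packets are in flight.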
It is an amazing technological advance that greatly expands the capacity and functionality of the network. It's no coincidence that while the market for voice services is growing at around 5 percent annually, the packet-switched data business is growing at an annual clip of 300 percent. Amazing. Q: What does this mean for the average American sitting at home? With all the hype, today's Internet is often much too slow for maximum productivity. Should the public be excited about the potential for an explosion in data transmission and expansion of bandwidth capacity? A: You bet they should. When we can harness this new technology and put it to work in living rooms across the country, we will open up exciting new horizons for the American people -- new horizons for entertainment, information, and communications services for all Americans. It means that the same high-speed Internet access that many Americans enjoy in the workplace will be available at home. It means that the same copper wire that allows families to connect over the phone will permit them not just to talk to each other, but to see each other as well. So instead of gathering around a telephone to sing happy birthday to a relative on the other side of the country, the family will gather around a computer and see their relatives in realtime video coast to coast. The technology is here. We just need to get it to America's homes. It will mean having the ability to download a feature-length movie in a matter of minutes, and then watch it when you want to, rather than having to consult the TV guide or worry about late fees at the video rental store. The technology is here. We just need to get it to America's homes. It means that we'll be able to hop from Web page to Web page on the Net as quickly as we can change channels on the television with the remote. People will no longer have to take a break from their home computer while they wait for it to download data. This is what home computer users call the World Wide Wait on the World Wide Web. The technology is here. We just need to get it to America's homes. It also means opening a whole new world of electronic commerce -- doing business over the Internet. Expanding bandwidth to the home will make shopping from home easier than shopping from a catalogue, with even glossier photos. This type of home shopping is just the tip of the iceberg when it comes to e-commerce. The technology is here. We just need to get it to America's homes. A recent edition of Business Week had a column on e-commerce that's entitled "You Ain't Seen Nothin' Yet." That title hits the nail on the head. This year, revenues from e-commerce are expected to be around $20 billion. That number is expected to grow to $350 billion in four years. E-commerce is so much more efficient. It can cut retailing costs by up to 10 percent. That means more jobs and billions of dollars added to the nation's economic output. Q: So if the technology is here, why aren't Americans seeing these benefits in their homes? A: The problem is bandwidth to the home. Imagine trying to fill a backyard swimming pool with a garden hose. There's plenty of water in the city reservoir to fill the pool, and there are huge water mains that can deliver the water down your street. But when you get to the final link in the chain -- the garden hose -- suddenly the water starts flowing a lot slower, because the hose is too small compared to the amount of water you are trying to pump through it to fill the swimming pool. 
The hose -- the pipe -- is just too small. It's the same way with high-speed data transmission. The Internet backbone is a network of networks that has plenty of capacity to pump data all over the country very quickly. But when it reaches that last mile, the copper phone line that runs into your house is a lot like that garden hose. It can't handle the amount of data that needs to be pumped through it to fill up your computer screen quickly. Q: The World Wide Wait is well known and felt. Limited bandwidth is a major stop to excitement about the Internet ... A: But all that is changing. Last year, the pundits were saying that all the bandwidth in the world wouldn't help if the major entertainment companies didn't change their perceptions of the Web. Well, guess what? Entertainment companies are converging on the Internet and buying the Web directories that we rely on to surf the Net. They see the Web as another distribution channel for their entertainment programming. That's why, NBC and Disney [recently] bought Internet portals. We recognize that convergence is upon us, and so the FCC is working hard to promote deployment of high-speed transmission across all the media. Cable companies are using their cable lines and high-speed cable modems to deliver data to the home at lightning speed. The FCC has adopted new rules so that soon Americans won't have to rent their cable modems from the local cable operator, but will be able to buy a standard cable modem from a number of sources, just like you buy a computer modem or a telephone. We also are seeing changes in wireless technologies. We just issued the first set of high-capacity wireless licenses for local multipoint distribution services, or LMDS. We will auction more spectrum in the future that can be used for these types of fixed services, such as our upcoming 39GHz auction. And [soon] wireless cable operators will be able to offer high-speed data. And broadcast television, for the first time, will be able to use its huge amounts of bandwidth for one-way digital transmission, including data and Internet access, as well as stunning high resolution video and CD-quality audio. Now we are confronting another issue with serious implications for broadband delivery over cable and broadcast: must-carry for broadcasters' second digital channel over cable. And phone companies and others are investing in ways to transform the copper phone line to work similar wonders for the American consumer. Many companies are chomping at the bit to provide their services to residential customers. At the FCC, our job is to fire the starting gun and let the race begin. We should not micromanage the race. We simply need to make sure that the race is fair and open to all who want to compete, because competition always beats regulation as the way to bring consumers more services, better quality, and the lowest prices. So our job is to ensure that these bandwidth technologies that can improve the lives of American consumers are deployed in a pro-competitive manner. I believe that this is what Congress intended the FCC to do. Q: By your term's end in 2001 and beyond, what new ideas will converging technologies spawn? What are your thoughts on such rapid change? A: Trying to predict the future in the telecom world is always dangerous. By 2002 there may be advances in technology that we can't even imagine now. But I can tell you what I hope to accomplish at the FCC in the next few years. 
One thing I am sure of is that the future of the FCC and the telecom industry will be driven by competition, digitization and convergence. The FCC's immediate job is to foster and encourage the transition of the communications industry from a regulated to a competitive environment and clear the way for enormous technical innovation. A decade ago, few would have predicted the influence that Bill Gates and Microsoft would have on the communications marketplace. It's certainly a fast-changing landscape. Consider the debate on the Telecommunications Act of 1996 that took place at a time when the Internet was only just beginning to emerge as a phenomenon in telecommunications. Most anyone who connected to a commercial online service did so at a mere 9,600 bits per second. Building Web pages for a living seemed a risky proposition. Q: And now, in the wake of the 1996 telecom act, the FCC is in a historic new era, and ironically "growing larger to get smaller." How is this transition going; what lies ahead; what is the ideal scene for the shape and role of the FCC? A: When I became chairman, I said my tenure would be guided by the three Cs -- competition, community and common sense. My vision of the FCC in the future is one in which there is competition in all segments of the telecom marketplace, the telecom infrastructure serves to create a national and global community in which information is easily shared, and regulation, where necessary, is governed by common sense and is applied only when needed and is constantly refined to address changing conditions. In a fully realized competitive future, I also see a changed FCC. The commission can be smarter and leaner. Where we can be smaller, we should be, but we should not reduce size if it means undermining enforcement of rules necessary to protect competition, consumers and the public interest. As competition begins to develop, we can eliminate rules that become unnecessary. But the FCC must still referee the competitive marketplace. There are some areas, such as public safety, equal opportunity and consumer protection issues, that cannot always be left to marketplace forces. In these areas government regulation is and will continue to be appropriate. Q: How do you respond to those who go so far as to say common law should rule telecommunications and the FCC could be abolished? A: It shouldn't be a surprise that government can play a role in eliminating the digital divide. After all, the Information Revolution was started by public leadership and investment. Government scientists invented the Internet, which was the catalyst for Silicon Valley and other high tech corridors around the country. ... Can we really tolerate leaving our poorest communities behind, stranding poor kids in our most distressed inner city and rural areas in a technological desert? In this era of retrenchment in affirmative action, where the number of African Americans and Hispanics at the University of California at Berkeley and the University of Texas is the lowest in decades, can we really tolerate going down a path where the information haves become have-mores, while the information have-nots become have-nones? We can't do that. We must continue to help open the doors of opportunity... Q: Such opportunities can be created in part by fair competition. Specifically, what is your vision of how a competitive environment will be achieved? A: I see the FCC as having six key responsibilities as we move to a competitive environment: 1. 
Eliminate or mitigate bottlenecks and maintain a competitive market structure. The key to a "pro-competitive, deregulatory" communications policy is competition rather than monopoly. We must act to remove bottlenecks where the exercise of market power permits them to appear, and we must maintain a competitive market structure. This means establishing interconnection standards for telecommunications technologies where warranted, overseeing compatibility standards, and establishing the obligations, where necessary, of firms to extend services to others. 2. Deregulating communications services when consumers can choose the best combination of price, service and quality for their needs. This means writing fair rules of competition, eliminating and discarding regulations no longer necessary and finding sensible ways to regulate noncompetitive services that remain -- and having the wisdom to distinguish between the two. 3. Protecting consumers. As we move toward a competitive marketplace and encourage wider entry, we need to acknowledge that not all competitors are scrupulous, and not all means of garnering competitive advantages are fair to consumers, especially those consumers who are used to obtaining telecommunications services from regulated monopolists. 4. Promoting efficient use of the electromagnetic spectrum. Assuring that the spectrum is used efficiently and flexibly, and that those licensed to use it can do so free of unwarranted interference. Promoting efficient use does not, however, mean micromanaging that use. Experience has shown us that broad flexibility for licensees enhances efficient use of the spectrum and permits licensees and the marketplace to develop the products that consumers want. 5. Strengthening the community. Our communications laws have never reflected only economic efficiency. They have always embraced more: that communications services should be widespread, tie our communities together and help us build a stronger, more prosperous, and safer world with greater opportunity for all and opportunities for a wide range of voices to be expressed publicly. We must ensure that communications embodies the American values in the law: universal service to promote ubiquitous phone service and economic opportunity for all Americans, including rural areas, classrooms and rural health centers; access for people with disabilities; spectrum for public safety needs; elimination of market-entry barriers for small business and new entrants; and diversity of ownership and employment. 6. Advancing our guiding principles worldwide. Even when it established the FCC in 1934, Congress recognized that we needed worldwide communications services. The communications industry is truly global today. As the world leader in communications services and innovation, the U.S. sets the standard for promoting open and competitive markets. Q: Looking at the changes going on that have led to a smaller, more connected world, what do you see as vital to keep in mind as we move forward into a new era? A: Without a doubt, the main thing we must always keep in mind in formulating telecom policy is to ensure that everyone has an equal opportunity to participate in the exciting new telecom world that we'll see in the 21st century. We must never become a nation of information haves and have-nots, and the decisions being made in the next months at the FCC and on Capitol Hill will determine whether our country and world are separated by a digital divide or not. We can't let that happen. 
Victor Rivero is a writer based in Boston and Burlington, Vt.
<urn:uuid:47b909f0-e6e5-42e9-81b0-2a64ae846623>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Giving-the-Telecosm.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00558-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956474
3,469
2.671875
3
The OCCURS clause cannot be used at the 01, 66, 77 and 88 levels. There are several reasons for that:
1. 01-level items are usually record-level items. The OCCURS clause is used for specifying multiple occurrences of fields, NOT of records.
2. If the OCCURS clause were specified at the 01 level, you could not perform SEARCH and SEARCH ALL operations in an efficient manner.
3. In COBOL, a table is nothing but a "structure of arrays," so you have to specify the array (OCCURS) within a structure (the 01 level).
4. In COBOL, 01 and 77 level items are aligned on a double-word boundary, so if an item declared at the 01/77 level had multiple occurrences, each occurrence would have to start on a word boundary. CODASYL therefore did not allow the OCCURS clause at the 01 and 77 levels, to avoid unnecessary slack bytes.
<urn:uuid:05a931e0-7e29-4291-b248-3046259f9c20>
CC-MAIN-2017-04
http://ibmmainframes.com/about1448.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00101-ip-10-171-10-70.ec2.internal.warc.gz
en
0.847205
194
2.828125
3
Where in the world is the geospatial data?
Policy, technology combine to improve data management - By Doug Beizer - Jul 31, 2009
Agencies need to better coordinate their geospatial information needs to reduce duplicative efforts and provide more complete information, several experts recently told Congress. At a July 23 hearing before the House Natural Resources Committee's Energy and Mineral Resources Subcommittee, witnesses said the federal government's approach to geospatial data is not well-integrated with state and local efforts. While experts say policy changes must be made to improve the situation, they add that emerging technologies might help agencies make better use of geospatial information. For example, Google recently launched a new version of Google Earth Enterprise, the technology used to build Google Earth and Google Maps, said Dylan Lorimer, product manager at Google Earth Enterprise. The product lets organizations use their geospatial information to build custom versions of Google Earth and Maps, Lorimer said. "A number of our customers have huge archives of aerial or satellite imagery over their areas of interest, so now we are allowing them to essentially build with all of that data and view it with Google Earth," Lorimer said. For example, the South Florida Water Management District, a regional agency charged with managing and protecting water resources, uses Google Earth Enterprise to create a common picture of the state of Florida's waterways. The agency uses aerial photography and information layers depicting structures, canals, district-owned lands, water use permits and environmental monitoring stations. Water officials throughout the district log on to the system to see all available information — such as canal levels — visually integrated in one place. The new version lets organizations easily publish custom globes to the Web in 3-D. The previous version allowed only flat, 2-D maps to be published to the Web. Anything more complicated required customized servers and programming.
Doug Beizer is a staff writer for Federal Computer Week.
<urn:uuid:688191f5-71b8-4594-84cd-c2e03884f51a>
CC-MAIN-2017-04
https://fcw.com/articles/2009/08/03/week-google-enterprise-geospatial.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00219-ip-10-171-10-70.ec2.internal.warc.gz
en
0.900011
406
2.5625
3
She's the fastest train on the line - Johnny Cash, Orange Blossom Special
Paris and Lyon, Tokyo and Osaka, Madrid and Seville, Seoul and Busan - these cities have something in common. They're connected by what many believe is the future of transportation - high-speed rail. High-speed rail systems whisk passengers hundreds of miles in mere hours by traveling at speeds as high as 357 mph. That record, recently established by the French TGV (train à grande vitesse, or high-speed train), means the trains can move almost as fast as an airliner. And while most high-speed trains run slightly slower - around 200 mph - over the past several decades, they have proven their value, reliability and safety almost everywhere. Almost everywhere but here, that is. In China, engineers built the world's first Maglev - magnetic levitation - high-speed train. Operational since 2004, the train runs a 19-mile route in Shanghai between Pudong Shanghai International Airport and Shanghai's Lujiazui financial district, and covers the distance in seven minutes. In Germany, the InterCityExpress - known as ICE - rockets passengers across the country to major cities like Berlin, Hamburg and Munich. The Eurostar Italia ferries riders between cities like Rome, Florence, Naples and Turin at 186 mph. Plus, Italians are in the midst of constructing nearly 400 additional miles of high-speed railway. In Japan, high-speed trains known as Shinkansen have operated since 1964. The now expansive network of trains, tracks and stations crisscrosses the country and has served more than 6 billion passengers without any major safety issues. In the United States, high-speed rail systems have yet to leave the station. In fact, they have yet to leave the realm of wishful thinking. Despite high-speed rail's proven global track record, for some reason government - be it federal, state or local - is either unable or unwilling to get onboard. Many high-speed rail proposals exist, especially in large states with far-flung population centers, such as California, Texas and Florida, each of which announced plans for high-speed rail. The trouble is that these plans were created years ago, and not a single mile of track has been laid. In other regions of closely grouped cities, similar plans now gather dust. There are designs for high-speed trains to service Midwestern cities, such as Chicago, St. Louis and Minneapolis. Likewise, a train connecting Washington, D.C., New York City and Boston has long been in the works. So far, the best the United States has been able to come up with is the woeful Amtrak system. Slow, expensive and chronically late, the heavily subsidized railway has consistently failed to meet expectations. Amtrak's problems, however, are hardly its own doing. The idea of Amtrak is a noble one - a nationwide passenger railway. Unfortunately Amtrak has been plagued by poor management, budget shortfalls and frequently late arrivals because most of the tracks it runs on are privately owned, which means freight takes priority over people. There are some bright spots within Amtrak - such as the Capitol Corridor that runs between Sacramento and San Jose, Calif. There is also Acela, Amtrak's quasi high-speed rail line in the Northeast, running from D.C. to Boston. The train is capable of speeds approaching 150 mph, but due to outdated infrastructure and arcane regional speed restrictions, the train averages around 75 mph.
Both lines boast far more ridership than other Amtrak routes, but neither can offer any service approaching true high-speed rail. Florida seems like the perfect location to build a high-speed rail system. Long and narrow with many large and distant cities, common sense would seem to dictate that Floridians would like an option besides airlines to quickly travel from Miami to Tallahassee. In fact, in 2000, Florida voters passed an amendment to the state constitution requiring the state to build a high-speed rail system. So why doesn't one exist? "In 2000, a gentleman by the name of [Charles] Doc Dockery in the Lakeland area, took it upon himself to push for a constitutional amendment [requiring high-speed rail be built] that he was able to put on the ballot with the appropriate number of signatures. It went on the ballot in 2000 and was approved by the Florida voters," said Nazih Haddad, staff director of the Florida High Speed Rail Authority. With the amendment in the state constitution, the Florida High Speed Rail Authority was created. Soon afterward, the authority went to work, looking first at a route between Tampa and Orlando. The authority believed a phased construction process would yield the best results and identified the Tampa-Orlando line as the optimal route to start with. After receiving two private-sector proposals in 2003, it was determined that the initial route would cost approximately $2.4 billion. All indications pointed toward Florida being the first state to finally build a high-speed rail system. But then, in early 2004, things began to unravel. The high cost of the rail system and its associated politics led to an effort to repeal the amendment passed just four years earlier. This effort, supported by former Gov. Jeb Bush, removed the mandate from the state constitution but left the rail authority in place. The repeal passed with 64 percent approval. Many have speculated that the repeal amendment was worded to confuse voters who had so recently voted yes on the very same issue. "Some people will tell you due to some confusion in how the ballot initiative was written, a lot of people who thought they were voting for high-speed rail were actually voting for the repeal of the constitutional amendment," Haddad said, adding that the governor had long been opposed to the idea of a high-speed rail. "The basic reasoning is it costs a lot of money - but any major transportation infrastructure costs a lot of money," Haddad explained. "They were afraid the partnership with the private sector was not going to yield the benefits it [promised]." Once virtually set in stone, the promise of a Florida high-speed rail system has all but died. The rail authority last met in June 2005. Florida isn't the only story of a high-speed railway that nearly came into existence before being snuffed out by political wrangling. In a previous Government Technology article (Transportation's Plan B, February 1992) it appeared that high-speed trains would be cropping up everywhere. "The era of the Interstate Highway System is over," Roger Borg of the Federal Highway Administration was quoted as saying. The story referenced a high-speed rail system in Texas that may have been even closer to being built than the Florida project. The Texas High Speed Rail Authority had, by 1992, awarded a $5 billion contract to a consortium known as Texas TGV. 
Texas TGV was headed by Morrison Knudsen, known as Washington Group International (recently acquired by URS) and was planning to use French TGV trains to service an area called the Texas Triangle - Houston, San Antonio and Dallas-Fort Worth. The Texas project was to be built entirely with private-sector money. When our article ran, the train was scheduled to begin operating by 1998. For Texas TGV, all that had to be done was to raise the funds necessary to begin construction. Yet no tracks were ever laid and no trains were ever delivered. Why? Enter Southwest Airlines. The low-cost, Texas-based airline would have faced significant competition from a high-speed train, and the company invested in a massive lobbying and public relations campaign to discredit high-speed rail in Texas. It succeeded, and the project was scuttled in 1994, according to records of the Texas High Speed Rail Authority in the Texas State Archives. A decade before that, California took its first stab at building a high-speed rail system. In 1982, Gov. Jerry Brown signed AB 3647, which called for $1.25 billion in tax-exempt bonds to build a Shinkansen-style train that would be managed privately and operated for profit. But by leaving the California Department of Transportation out of the loop, the proposed train drew the ire of many in government. In addition, ridership projections were found to have been largely overstated and connections with mysterious Japanese contractors led to a loss of faith in the project, which quietly died in 1983. Back in those days, Mehdi Morshed and his wife Linda were the chief transportation consultants for the California Legislature. Though unable to make high-speed rail a reality in 1983, Mehdi Morshed would get another chance in 1996 when Gov. Gray Davis created the California High-Speed Rail Authority (CHSRA). Mehdi Morshed was appointed executive director of the authority and has been working once again to bring high-speed rail to the Golden State. But Mehdi Morshed and the CHSRA face yet another crossroads. They reportedly need $103 million to continue paying project engineers and to buy rights of way. Yet Gov. Arnold Schwarzenegger is offering only a fraction of the money requested - about enough to keep the lights on - until the CHSRA presents a way to fund the estimated $40 billion it costs to build a high-speed rail network that connects San Diego and Los Angeles to San Francisco and Sacramento. "We needed the $14 million for this fiscal year to hire the engineers and get mobilized and get ready," Mehdi Morshed explained. "We need $103 million next year to continue that work, and then we'll need somewhere around $200 million the next year." A bond, which is set to be on the November 2008 ballot, would secure nearly $10 billion to begin construction. This, Mehdi Morshed said, is the cornerstone of building a high-speed rail network that would allow people to move between Northern and Southern California in under three hours, finally giving residents a long sought-after alternative to expensive flights or the grueling six- to eight-hour drive along Interstate 5. But the bond measure has been postponed twice, and Schwarzenegger is threatening to postpone it again. "We've been funded annually by the Legislature from existing transportation funds. When we go to construction, it's going to require far more money than they can support with the existing budget," Mehdi Morshed said. "The Legislature and the governor proposed the $9.95 billion bond measure," he said. 
"It's been postponed twice for a couple reasons. One, they wanted other priority projects to move forward; two, the high-speed rail project wasn't really ready to go into construction so the bond money wasn't needed at the time. For 2008, it's different. Now we actually need the money because in a couple of years, we'll be ready to go into construction." On the surface, supporting high-speed rail seems like a no-brainer for Schwarzenegger. The Golden State governor garnered considerable press for his sudden shift toward promoting green policies. According to studies conducted by the CHSRA, the train would serve nearly 117 million passengers by 2030 while generating annual revenue of between $2.6 billion and $3.9 billion. In addition, it would cost two to three times less than expanding highways to accommodate the same need. High-speed trains, which run on electricity, could also have a potentially huge positive impact on California's air quality. Ardent supporters say the train would eventually pay for itself, and even the most pessimistic are forced to admit that the train would generate more money than highways, which cost millions annually to maintain and repair. One would think it should be a slam-dunk for the suddenly green-loving governor. Not exactly. According to Sabrina Lockhart, a spokeswoman for Schwarzenegger, the CHSRA must explain how it plans to raise the additional $30 billion before the governor throws his weight behind high-speed rail. "He is asking the Legislature to indefinitely delay putting this $10 billion bond on the ballot in November ," she said. "What he is waiting for is for the [California] High-Speed Rail Authority, which is the body that is responsible for developing the plan for high-speed rail in the state, to come up with a comprehensive financial plan for building the system. "It's expected to be more than $40 billion so what the governor is essentially saying is, 'Before we ask California's taxpayers to mortgage $10 billion plus interest, we have to know where the remaining $30 billion is going to come from,'" Lockhart said. For months, rumors circulated that the governor planned to snuff out high-speed rail once and for all. Schwarzenegger wants "to quietly kill this - and not go out and tell the people that high-speed rail isn't in the future," state Sen. Dean Florez told the Los Angeles Times in April. But suddenly in May, Schwarzenegger penned an editorial in The Fresno Bee where he appeared to have shifted his position on high-speed rail. "I strongly support high-speed rail for California, and especially for the San Joaquin Valley," the governor wrote. But he also added, "Before asking taxpayers to approve spending nearly $10 billion plus interest, it is reasonable to expect the authority and its advisers to identify with confidence where we will find the remaining $30 billion." Herein lays a classic example of government bureaucracy. The CHSRA says it needs $103 million to continue its work on planning the rail system. The governor says he'll support high-speed rail if the authority comes up with a way to pay for it, but in the meantime, cuts its budget to the point the CHSRA claims is barely enough to keep their doors open. Adding to the quagmire is the fact that to drum up any private investment, backers will likely need to present proof that California voters support building the railway. But unless the bond goes on the ballot, it will be difficult to prove such voter support exists. 
Despite these obstacles, Mehdi Morshed remains cautiously optimistic that there is a future for high-speed rail in California. "If you were to follow what the governor suggested, then basically the project will be put on hold, and probably for all practical purposes it won't be going anywhere," he said. "Based on what I hear and the people in the Legislature we've been talking to, there's a very strong desire on the part of the California Legislature to continue the funding for the project, and there didn't seem to be a great deal of support - virtually no support - for the governor's proposal to postpone the bond. The Legislature doesn't seem to be inclined to go along with what the governor wants." At a May 23 board meeting, however, the CHSRA may have shot itself in the foot. Presented at the meeting was a plan for a phased construction process. The phasing plan, should the rail bond be approved, would call for initial construction of the track to run from Anaheim, in Southern California, to San Francisco in the North. By choosing this lower-cost strategy, the board is obviously hoping to improve the chances of making high-speed rail a reality. Unfortunately such phased construction entirely omits the San Diego and Sacramento metropolitan areas - nearly five million voters who would be asked to approve a $40 billion project with nothing but the promise of a rail extension to come years later. As voters in these cities look around for the promised freeways that were never built, such a sell would be difficult. As California aptly demonstrates, high-speed rail projects need a high-profile advocate. The various rail authorities are simply not enough to make these railways a reality. Rick Harnish is the executive director of the Midwest High Speed Rail Association, a nonprofit advocacy group trying to spark interest in a high-speed railway that would connect major Midwestern cities. Harnish said people should demand that government step up and provide alternative transportation options. He added that if California made it happen, it would be far easier for high-speed rail to flourish elsewhere in the country. "It's not impossible, and people need to tell their legislators they need real travel choices," he said. "In a national sense, California is so important because if that system could get built, it would prove the case. The key is people throughout the country need to start telling their elected officials they want high-quality train service and they expect their elected officials to come up with a solution to make it happen. If the governor said we're going to link L.A. to the Bay Area within five years, it could be done very quickly and at a fraction of the cost of comparable highway capacity." Mehdi Morshed voiced similar sentiments - despite having only 1 percent of the requested $103 million approved by the governor. "The cost of a high-speed train is $40 billion, and that's a lot of money," he said. "But, over the same period of time that we're talking about building a high-speed train, California is going to spend more than $200 billion on highways and other transit modes in the state. Relative to all the other expenditures, it's not that huge of a change." High-speed rail in the United States has failed everywhere it has been proposed. Some blame an addiction to the automobile. Such an argument is easily disputed by the fact that most people have no choice but to use a car. Most, however, point to a lack of political will. 
And as Mehdi Morshed said, where would we be today without those who took risks in the past? "Look 20 years down the road; look at where your state is going to be; look at your children and grandchildren," Morshed said. "What are you going to do about their mobility and air quality? Are you going to leave them high and dry? Or are you going to do something to prepare for them, just like people before prepared for us?"
<urn:uuid:3a4a8d11-4989-42f7-80fa-6a28c5a8e243>
CC-MAIN-2017-04
http://www.govtech.com/featured/Fast-Track-to-Nowhere.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00183-ip-10-171-10-70.ec2.internal.warc.gz
en
0.974501
3,703
2.53125
3
Error handling is what the Web server does in the event a request is made resulting in an error. For example, if you try to go to a page that doesn't exist on a server, you will see the all-too-common "Error 404: File not found" message. In this menu you can list the error number and tell Apache to load a specified Web page or display a specified message if this error is encountered. Below is a list of common error codes and their meanings. You can refer to the Apache documentation for a complete list of error codes.
403 - Forbidden / Access Denied
404 - File Not Found
405 - Method Not Allowed
500 - Internal Server Error
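For readers who prefer to see the underlying configuration, custom error handling in Apache is normally expressed with ErrorDocument directives. The file names, message and URL below are placeholders, a sketch of the kind of entries this menu writes rather than the exact output of this product:

    # Serve a local page for missing files.
    ErrorDocument 404 /errors/not_found.html

    # Display a plain message for forbidden requests.
    ErrorDocument 403 "Access to this resource is denied."

    # Redirect internal server errors to another URL.
    ErrorDocument 500 http://www.example.com/server_error.html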
<urn:uuid:7cbac901-4405-49b8-a44c-e92a28725d69>
CC-MAIN-2017-04
http://infocenter.guardiandigital.com/manuals/SecureProfessional/node75.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00449-ip-10-171-10-70.ec2.internal.warc.gz
en
0.773918
145
2.734375
3
We are living in an increasingly interconnected world, and the so-called Internet of Things is our (inescapable) future. But how safe will we, our possessions and our information be as these wired and interconnected devices begin to permeate our lives? The current situation is not satisfactory, as HP's researchers have discovered. Using standard testing techniques, they have analyzed 10 of the most popular TVs, webcams, home thermostats, remote power outlets, sprinkler controllers, hubs for controlling multiple devices, door locks, home alarms, scales and garage door openers, and have come across "an alarmingly high average number of vulnerabilities per device." The flaws included the Heartbleed bug, DoS vulnerabilities, weak passwords, lack of encryption, and so on. All the tested devices had mobile applications through which they could be accessed and controlled remotely, and most included some form of cloud service. The researchers found that:
- Web interfaces of six of the 10 tested devices are vulnerable to cross-site scripting, have poor session management and weak default credentials
- 80 percent of the tested devices failed to require passwords of sufficient complexity and length, and 70 percent of the devices with the cloud and mobile app allow attackers to identify valid user accounts through account enumeration
- 90 percent of devices collected at least one piece of personal information via the device, the cloud or the device's mobile app
- 70 percent don't use encryption when transmitting collected data that might be sensitive via the Internet and the local network
- 60 percent of devices displayed software and/or firmware issues.
Mark Sparshott, Director of EMEA at Proofpoint, pointed out that while current bots typically send thousands of phishing emails in different campaigns, which allows defenders to identify and blacklist them, future IoT botnets will be 100 or 1,000 times larger. "It is conceivable that a future IoT bot could send just 1 phish and never appear on any reputation block list," he noted. "The IoT and the increasing use of zero-day threats to bypass signature-based security systems means that enterprise security strategies have to evolve to leverage cloud based dynamic sandboxing and malware analysis as well as focus on reducing the time to remediate the inevitable breach through automated security response." In order to minimize the risks, HP researchers have advised manufacturers to test and secure their devices and their various components. "Implement security and review processes early on so that security is automatically baked in to your product," they suggested, and added that implementing security standards and keeping to them will significantly improve their product's security posture.
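One of the cheapest checks an IT team can run against findings like the 70 percent unencrypted-transport figure is to inventory which device and cloud endpoints are reachable only over plain HTTP. The endpoint list below is entirely hypothetical; the sketch just shows the shape of such an audit:

    from urllib.parse import urlparse

    # Hypothetical inventory of IoT device/cloud endpoints to audit.
    ENDPOINTS = [
        "https://thermostat.example.com/api/status",
        "http://garage-door.example.com/open",      # plain HTTP: no transport encryption
        "http://192.168.1.42/webcam/stream",
        "https://hub.example.com/login",
    ]

    def unencrypted(endpoints):
        return [u for u in endpoints if urlparse(u).scheme != "https"]

    for url in unencrypted(ENDPOINTS):
        print(f"WARNING: {url} does not use TLS; credentials and data travel in the clear")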
<urn:uuid:95794d7c-c141-4b2f-9dd5-5e3457a724f2>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/07/30/iot-devices-are-filled-with-security-flaws-researchers-warn/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00449-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929667
549
2.546875
3
The Conficker worm, which has infected as many as 15 million computers according to some estimates, may do any number of things come Wednesday. Conficker-infected machines could be used for sending spam, logging keystrokes, or launching denial of service (DoS) attacks. But security experts are not predicting any widespread damage. The greatest impact they say the worm may have is to slow networks to a crawl as copies of the worm in infected machines search a list of 50,000 domain names for instructions indicating what to do next. According to the United States Computer Emergency Readiness Team (US-CERT), the worm can infect Microsoft Windows systems from thumb drives, network share drives, or directly across a corporate network if network servers are not protected by Microsoft's MS08-067 patch. US-CERT has developed a tool that can help state and local governments detect and remove the Conficker/Downadup worm from their computer systems. The tool is available to state partners through the Government Forum of Incident Response and Security Teams (GFIRST) portal. Experts briefed state and local CIOs and chief information security officers today, the Department of Homeland Security said in a press release. "While tools have existed for individual users, this is the only free tool - and the most comprehensive one - available for enterprises like federal and state government ... to determine the extent to which their systems are infected by this worm," said Mischel Kwon, US-CERT director. US-CERT recommends that Windows users apply Microsoft security patch MS08-067 (http://www.microsoft.com/technet/security/Bulletin/MS08-067.mspx) as quickly as possible to help protect themselves from the worm. This security patch, released in October 2008, is designed to protect against a vulnerability that, if exploited, could enable an attacker to remotely take control of an infected system and install additional malicious software. US-CERT advised home users that the presence of the worm may be detected if users are unable to connect to their security solution's Web site or if they are unable to download free detection/removal tools.
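The behavior described above, with each infected machine computing a large daily list of rendezvous domains to query, is what is now called a domain generation algorithm. The sketch below is a generic, simplified illustration of the technique seeded from the current date; it is not Conficker's actual algorithm, and the domain lengths and TLD list are invented for the example:

    import datetime
    import hashlib

    # Generic domain-generation sketch: derive a daily list of pseudo-random
    # domains from the date, so every infected host computes the same list.
    TLDS = [".com", ".net", ".org", ".info", ".biz"]    # illustrative only

    def daily_domains(date, count=10, length=8):
        domains = []
        for i in range(count):
            seed = f"{date.isoformat()}-{i}".encode()
            digest = hashlib.sha256(seed).hexdigest()
            # Map hex digits onto the letters a-p to form a host name.
            name = "".join(chr(ord("a") + int(c, 16)) for c in digest[:length])
            domains.append(name + TLDS[i % len(TLDS)])
        return domains

    print(daily_domains(datetime.date.today()))

Because defenders can run the same computation in advance, they can sinkhole or pre-register a slice of the predicted domains, which is broadly what the industry group tracking Conficker did.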
<urn:uuid:5261661e-6ccc-42eb-b2b3-f9711a466b30>
CC-MAIN-2017-04
http://www.govtech.com/security/New-Tool-Detects-Conficker-Worm-on.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00083-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927358
460
2.609375
3
When we created a table without a unique index, we were not able to perform any operations on the table (we were not able to view the table at all). In this respect I would like to know why a unique index is compulsorily required while creating a table in DB2?

If you create a table, it is not mandatory to create a unique index. But if the table contains a primary key, you must create the unique index; otherwise the table definition is incomplete, so you cannot access the table. I think you got me... Please verify again and let me know.

If you create a table with a primary key, then you should create a unique index on that particular column; otherwise you can't enter values into that particular table, i.e., without entering values into the table we can never see the table records.

I believe that without creating a unique index on a primary key table, an SQLCODE of -803 will come.
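A minimal DDL sketch of the situation the thread describes is below. The table and index names are made up, and the exact behavior varies by DB2 version and by how the table space is created, but it shows the pairing the posters are talking about: a primary key declared on the table, and the unique index that completes its definition:

    -- Table with a primary key; until an enforcing unique index exists,
    -- DB2 treats the definition as incomplete and blocks access to the table.
    CREATE TABLE EMP
        (EMP_ID   INTEGER  NOT NULL,
         EMP_NAME CHAR(30),
         PRIMARY KEY (EMP_ID));

    -- Creating the unique index on the key column completes the definition,
    -- after which the table can be read and loaded normally.
    CREATE UNIQUE INDEX XEMP1
        ON EMP (EMP_ID);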
<urn:uuid:122687a5-cd86-4f97-aa20-a074965facba>
CC-MAIN-2017-04
http://ibmmainframes.com/about5101.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00385-ip-10-171-10-70.ec2.internal.warc.gz
en
0.870795
199
2.53125
3
Researchers install environmental monitoring equipment at ancient Buddhist site
Friday, Jul 19th 2013
In order to help preserve one of the oldest and grandest Buddhist sites in China, researchers recently installed environmental monitoring equipment at the Bingling Temple Grottoes, the Xinhua news agency reported. The Bingling Temple Grottoes are a series of caves located in central China that hold a number of historically significant Buddhist statues and murals. According to Xinhua, the grottoes were first established around A.D. 420 on the Silk Road, the noteworthy trade route that used to be the key overland connection between China, the Middle East and Europe. Chinese authorities are working to have the Bingling Temple Grottoes, as well as other notable points along the old Silk Road, included on the World Heritage List.
How environmental monitoring facilitates historic preservation
Although the grottoes remain even more than 1,000 years after they were first created, their future is not guaranteed due to a number of external variables that can degrade the statues and murals over time. In particular, excess humidity, whether from additional water vapor generated by tourists visiting the region or from the air, can erode the rock. To better track and monitor the effects variables such as heat, humidity and carbon dioxide have on the grottoes, Xinhua reported that researchers recently installed environmental monitoring equipment in 20 of the 183 caves within the temple complex. "The data will help us analyze the impact of visitors and weather on the caves' environment," said Shi Jingsong, head of an institute in charge of protecting the grottoes, according to Xinhua. One of the reasons why the statues and murals have been able to survive for so long is because the air in the region is naturally devoid of moisture and because the top part of the cliff protects the grottoes from rain and sunlight, according to the Lanzhou Institute of Chemical Physics and the Chinese Academy of Sciences. This is especially notable in this instance because the temple is made from sandstone, which is more porous and brittle than many other natural building materials. The issues presented by sandstone have also plagued researchers working in other parts of the world, Geotimes reported. For example, humidity threatens to degrade the ancient structures in Petra, the city located in modern-day Jordan that served as a backdrop for scenes from the original Indiana Jones trilogy. Sandstone is more absorbent than other rocks, so it more easily absorbs the moisture that will eventually degrade its surface. "What I found was that the greatest weathering was on western faces, where you get wetting from storms, rain and sun," Tom Paradise, a geomorphologist from the University of Arkansas at Fayetteville, told Geotimes about Petra. "It's these little tiny super-frequent events, like wetting and drying from dew every morning, that causes the sand to disaggregate." This problem is also present in Afghanistan's Bamiyan Valley, which has housed Buddhist statues and fresco art for approximately the last 17 centuries, Geotimes reported. Although political issues have caused more damage than any other factor, the soft sandstone that facilitates the easy carving of artwork also exposes the area to external damage over time. Although its effects are especially pronounced with sandstone, excess moisture can erode any rock surface over time.
As such, humidity monitoring equipment is one of the best tools researchers can use to protect historic sites like the Bingling Temple Grottoes.
<urn:uuid:fe44c067-5724-474e-b8e6-f25941b14ae4>
CC-MAIN-2017-04
http://www.itwatchdogs.com/environmental-monitoring-news/research-labs/researchers-install-environmental-monitoring-equipment-at-ancient-buddhist-site-474749
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00201-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954361
708
2.75
3
Introduction by George Kupczak of the AT&T Archives and History Center
On an elementary conceptual level, this film reflects the multifaceted scientific hyperthinking that was typical of a Bell Labs approach. Host Dr. J.N. Shive's presence as a lecturer is excellent - it's understandable by a layperson even when he branches into equations, because he uses copious amounts of real-world examples to bolster the material. Shive's role at Bell Labs was more than just a great lecturer: he worked on early transistor technology, inventing the phototransistor in 1950, and the machine he uses in the film is his invention, now called the Shive Wave Machine in college classrooms.
Dr. J.N. Shive of Bell Labs demonstrates and discusses the following aspects of wave behavior:
- Reflection of waves from free and clamped ends
- Standing waves and resonance
- Energy loss by impedance mismatching
- Reduction of energy loss by quarter-wave and tapered-section transformers
Original audience: college students
Produced at Bell Labs
Footage courtesy of AT&T Archives and History Center, Warren, NJ
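The first two topics listed above, reflection from clamped versus free ends and the standing waves that build up from it, are easy to reproduce numerically. The Python sketch below is a bare-bones finite-difference model of a pulse on a string, offered only as an illustration of the physics Shive demonstrates mechanically, not as a model of his torsion-bar machine:

    import numpy as np

    # 1-D wave on a string, explicit finite differences at the stability limit.
    N, steps = 200, 110
    x = np.linspace(0.0, 1.0, N)
    u_prev = np.exp(-((x - 0.7) / 0.05) ** 2)   # Gaussian pulse near the right end
    u = u_prev.copy()                            # zero initial velocity

    def step(u, u_prev, clamped_right=True):
        u_next = np.zeros_like(u)
        u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + (u[2:] - 2 * u[1:-1] + u[:-2])
        u_next[0] = 0.0                          # left end clamped
        if clamped_right:
            u_next[-1] = 0.0                     # clamped end: reflection is inverted
        else:
            u_next[-1] = u_next[-2]              # free end: reflection is upright
        return u_next, u

    for _ in range(steps):
        u, u_prev = step(u, u_prev, clamped_right=True)

    print("min/max displacement:", float(u.min()), float(u.max()))

With clamped_right=True the pulse that returns from the right end is inverted, so the minimum displacement is strongly negative; rerun with clamped_right=False and it comes back upright, leaving the minimum near zero. That sign flip is exactly what the wave machine shows when its far end is clamped or left free.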
<urn:uuid:d3ed4018-83a2-4427-b221-010b7169a66d>
CC-MAIN-2017-04
http://techchannel.att.com/play-video.cfm/2011/3/7/AT&T-Archives-Similarities-of-Wave-Behavior
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00109-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935696
238
3.3125
3
Seems like everybody here in the U.S. is down on the government these days, and with good reason. Between the NSA-spying-on-us thing, the recent government shutdown thing and the whole HealthCare.gov thing, people aren't too thrilled with the jobs our leaders are doing. Well, the good news is that civic hackers now have a new way to enable people to find and give their elected representatives a good what-for. This week Google announced a significant enhancement to its Civic Information API. The API was launched last year, and has been used to provide information and apps on polling places and election issues. Now, the API includes data on elected representatives in the United States, from the federal government down to local officials (Google plans to eventually collect data on government jurisdictions and representatives in other countries). Using this new data and functionality, developers can, in theory, create apps to let the voters know who their representatives are and to help constituents get in touch with them. As part of this effort, Google, in conjunction with a number of other organizations such as the Sunlight Foundation, has helped to develop Open Civic Data Identifiers (OCD-IDs). These OCD-IDs are a new open standard for identifying government jurisdictions, people, events, bills, etc. The hope is that governments and developers will start using this standard, which will make it much easier to connect information in different civic data sets. For the Civic Information API, for example, Google used Open Civic Data Division Identifiers. If you fancy yourself a civic hacker, you can not only use these new data to create great new apps for voters, but you can also contribute your coding skills to the open civic data cause, by creating a new scraper to collect data. Non-developers can also help by manually curating and editing civic data. Governments, of course, can also lend a hand by using these new IDs and publishing data in open formats. Hopefully, all of this will help give the common folk more of a voice in government and help prevent things like a government shutdown. Unfortunately, though, I'm not sure even open civic data can do much to prevent an elected official from going insane.
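As a sketch of what a "who represents me" app looks like against this API, the snippet below queries the representatives endpoint for a street address. The endpoint path, parameter names and response fields reflect the v2 Civic Information API as I recall it, and the API key is a placeholder, so treat the specifics as assumptions to verify against Google's documentation:

    import requests

    API_KEY = "YOUR_GOOGLE_API_KEY"   # placeholder; supply your own key
    URL = "https://www.googleapis.com/civicinfo/v2/representatives"  # assumed v2 endpoint

    def representatives_for(address):
        resp = requests.get(URL, params={"address": address, "key": API_KEY})
        resp.raise_for_status()
        data = resp.json()
        # "offices" entries point into the parallel "officials" list by index.
        for office in data.get("offices", []):
            for idx in office.get("officialIndices", []):
                yield office["name"], data["officials"][idx]["name"]

    for office, name in representatives_for("1600 Pennsylvania Ave NW, Washington, DC"):
        print(f"{office}: {name}")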
<urn:uuid:7963765f-e0df-490c-82b5-8ea57a293c84>
CC-MAIN-2017-04
http://www.itworld.com/article/2702725/cloud-computing/an-api-for-finding-and-berating-your-elected-representatives.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00413-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933338
514
2.515625
3
Brave New Network: How IT Departments Will Enable Artificial Intelligence

As artificial intelligence continues to move forward at a rapid pace, so too will demands on IT departments. Artificial intelligence systems and their development are becoming increasingly important. "Smart" devices, self-driving cars, and robots are all gaining more attention. We are now at the point where people – blue-collar and white-collar alike – are worried about losing their jobs to robots. Like it or not, however, AI is the future – for the industry, for enterprise, and for society. Of course, all of this will mean new headaches for the CIO, the IT administrator and the network engineer alike as the ongoing evolution of AI places increasing demands on the networking industry. Forgetting, for a moment, how one successfully programs and achieves artificial intelligence, there are very real data processing and network concerns that have to be addressed. It's one thing to program artificial intelligence – but how do you build it? How do you house it? How do you process it? What does the AI-enabling network of tomorrow look like?

AI networks will use the cloud

The cloud – known for its ability to offer agility and added storage – is already enabling AI to some extent. IBM, for instance, maintains Watson in the cloud partly for accessibility issues and partly because of the sheer amounts of data that Watson must effectively contend with. For artificially intelligent robots, however, the cloud is a necessity. Onboarding everything a mobile robot will ever need is impractical for reasons of power requirements, operational duration, and cost. The cloud has other benefits for robots, too. "Cloud robotics allow robots to take advantage of the rapid increase in data transfer rates to offload tasks without hard real-time requirements," reports robotics project team RoboEarth. Indeed, artificially intelligent robots are already using the cloud. A 2013 UC-Berkeley project outlined how robots could better grasp objects through cloud enablement. Companies such as Gostai offer customers cloud access to a wealth of complex robotic actions, "including … advanced vision and speech algorithms." And last year's DARPA Robotics Challenge – which tested robots in simulated disaster scenarios – used a cloud infrastructure over a VPN connection for added resiliency.

AI networks will store lots of data

Rob High, VP and CTO of IBM Watson, told students and technologists during his keynote address at MIT's annual Tech Conference earlier this year that we are presently experiencing an "information explosion." Identifying the Internet of Things as a significant contributor, High noted that, at present, the world produces approximately four exabytes of new data each day. "We really do need computers to help us," High argued. Specifically, said High, we need cognitive computing – an advanced form of machine learning that is vital to AI. "Cognitive systems have to learn ... because that is what it takes to deal with all the variations we deal with as human beings[,]" said High. "You can't sit down and write all the rules of language and ever feel like you've completed the task. Our language is far too diverse and varied for that." But even cognitive AI itself presents its own data-management problems, requiring advanced interactive solutions. "Deep learning typically requires a significantly larger data set for training," reported High, who said the technology is "changing the paradigm of training to one of 'give us all your training data' ... to 'now let's put it in this system of interaction' ... so it can kind of learn on the job, if you will."

AI networks will be fast

"To read all of the [medical research] data being published on a weekly basis would require about 160 hours of reading a week," said High, who went on to point out that the average doctor spends only about five hours per week reading medical research. (For those of you who don't feel like doing the math, a week contains 168 hours.) As a counterpoint, High related Watson's early days as a Jeopardy contestant in 2011. "We had about 200 ... pages of literature that Watson had to read at the [time] the question was being asked," said High. "It had about three seconds." Watson went on to beat two of Jeopardy's biggest champions ever. The challenges – and the necessary speed – of AI systems like Watson are compounded even further today because our information is becoming exceedingly difficult to process. Unstructured datasets (e.g., video recordings) are inherently problematic to search and analyze. Structured data, too, can be difficult to break down and process mathematically because it contains "human forms of expression." These forms of human expression – including tonality and body language – inform human intelligence in real time and allow us to react. "I will intuitively and subconsciously react to [your cues] and try to adjust what I'm saying," said High of human intelligence and communication. "We all do this." Accordingly, faster connections and powerful processing will be imperative for IT because AI systems must be able to interpret our data quickly – and act on it.

Joe Stanganelli, Principal of Beacon Hill Law, is a Boston-based attorney, business consultant, writer, speaker, and bridge player. Follow him on Twitter at @JoeStanganelli.
<urn:uuid:a331e386-0fbb-474c-9d6e-715493a88000>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/datacenter/how-it-departments-will-enable-artificial-intelligence.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00102-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956837
1,091
2.53125
3
1. An EMPLOYEE table has the following fields: EMP_ID, SALARY.
a) Write a query to display those employees that have salary > average salary of all employees.
b) Write a query to display the count of those employees that have salary > average salary of all employees.

2. Given the EMP and DEPT tables below:

EMPNO  ENAME  JOB       MGR   HIREDATE   SAL   DEPTNO
7369   SMITH  CLERK     7902  17-DEC-80  800   20
7499   ALLEN  SALESMAN  7698  20-FEB-81  1600  30
7521   WARD   SALESMAN  7698  22-FEB-81  1250  30

DEPTNO  DNAME      LOC
10      STORE      CHICAGO
20      RESEARCH   DALLAS
30      SALES      NEW YORK
40      MARKETING  BOSTON

a) List all employees who are working in a department located in BOSTON.
b) List all those employees who are working in the same department as their manager.
c) Retrieve the minimum and maximum salary of clerks for each department having more than three clerks.
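One possible way to answer question 1, sketched with SQLite so it can be run end to end; the table and column names follow the post, and the sample rows come from the EMP data shown above.

```python
import sqlite3

# Illustrative answers to question 1; the salaries are taken from the EMP
# rows listed above, not a real payroll table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, salary REAL)")
con.executemany("INSERT INTO employee VALUES (?, ?)",
                [(7369, 800), (7499, 1600), (7521, 1250)])

# 1a) employees earning more than the average salary of all employees
rows = con.execute("""
    SELECT emp_id, salary
      FROM employee
     WHERE salary > (SELECT AVG(salary) FROM employee)
""").fetchall()

# 1b) count of those employees
count = con.execute("""
    SELECT COUNT(*)
      FROM employee
     WHERE salary > (SELECT AVG(salary) FROM employee)
""").fetchone()[0]

print(rows)   # e.g. [(7499, 1600.0), (7521, 1250.0)]
print(count)  # 2
```

Question 2a would follow the same shape with a join, selecting from EMP joined to DEPT on DEPTNO and filtering on LOC = 'BOSTON'.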
<urn:uuid:9f14fb82-3ba3-4892-8aa0-8d152b83d7d6>
CC-MAIN-2017-04
http://ibmmainframes.com/about25249.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00496-ip-10-171-10-70.ec2.internal.warc.gz
en
0.867756
228
2.875
3
GIS is the archenemy of West Nile Virus and all mosquitoes in Valdosta, Ga. The city's Mosquito Population Control Program records data from about 35 mosquito traps within the city limits, each trap set in one-mile radius intervals. Students from Valdosta State University collect the mosquitoes each week to count, type, and test them for West Nile Virus - a potentially deadly affliction contracted primarily by the bites of infected mosquitoes. The data is then entered into a GIS tool that directs John Whitehead, the deputy city manager for operations, where to spray for mosquitoes. "The GIS system has helped us tremendously to pinpoint where we need to put our resources, instead of just going out, like in the past, when we were spraying the entire city every week," Whitehead said. Spraying only targeted areas saves the city roughly $70,000 annually in cost avoidance. The GIS and spraying program now costs $30,000 each year. Reductions in labor, overtime pay, and chemical and vehicle expenses produced the savings, said Whitehead. A child in Valdosta contracted West Nile from a mosquito in 2001, which drove Whitehead to create the GIS tool in collaboration with experts at Valdosta State. There are no known cases in Valdosta since then of humans testing positive for West Nile. The initial goal of the GIS tool was to spot earlier the mosquitoes that are carrying West Nile, but the presence of fewer mosquitoes was an additional benefit. Whitehead said that West Nile Virus always appears in the same location, which enables him to focus his spraying efforts. "I immediately go aggressively into the chemical spraying. There are two components to a program. You have your larvaciding and your adulticiding," Whitehead said. The GIS program matches each trap with data on nearby nursing homes, recreational facilities, schools, and day-care centers - places where at-risk people congregate. South Georgia is home to many honeybee growers, and the GIS program also shows Whitehead where to avoid spraying bee farms.
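The trap-to-facility matching the program performs can be pictured with a small proximity check; the coordinates, facility names and the one-mile radius below are invented for illustration and are not Valdosta's actual data.

```python
from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))

positive_trap = (30.8327, -83.2785)          # hypothetical trap location
facilities = {                               # made-up at-risk sites
    "Sunrise Nursing Home": (30.8401, -83.2712),
    "Valdosta Day Care":    (30.8902, -83.3301),
    "City Recreation Park": (30.8290, -83.2750),
}

# Flag any at-risk facility within one mile of a trap that tested positive.
for name, (lat, lon) in facilities.items():
    d = miles_between(*positive_trap, lat, lon)
    if d <= 1.0:
        print(f"Spray priority: {name} ({d:.2f} mi from positive trap)")
```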
<urn:uuid:70fc37e1-708e-4d9a-ac59-e92e63f60870>
CC-MAIN-2017-04
http://www.govtech.com/geospatial/GIS-Tracks-West-Nile-Virus-and.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00156-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961408
423
3.25
3
Changing the Fate of Those In Space
By Tom Steinert-Threlkeld | Posted 2003-02-10

Put the right information and analysis in front of the people who can act on it. There will be more questions than answers for weeks, if not months, to come. But one conclusion is certain, regardless of what happened on the outside or inside of the space shuttle Columbia. Technology has its limits. Information systems have their limits. Human analysis, foresight and insight have their limits. This is not the last time a set of explorers will perish in a journey to or from outer space. Take the foam. This was the piece of insulation on an external tank that appeared, according to NASA, to "impact the orbiter" on its left wing during liftoff. In the 48 hours after the breakup, signs seemed to point to intense heat on the left side, indicating tiles there were not doing their job of shielding against the 3,000-degree temperatures of re-entry. Shuttle program manager Ron Dittemore said the prospect for damage to the tiles was evaluated and discounted by experts shortly after it occurred, early in the orbiter's flight. NASA and its experts, he said, understood tile. The foam was being discounted again, four days after the tragedy. But debris is a perpetual problem. Several years ago, insulation was erupting like popcorn, "impacting" tile. The damage was superficial and the problem fixed. Yet, a couple of flights ago, debris was shedding again. The red flag should still be raised. "Two occurrences in the last three flights is certainly the signal to our team that something has changed," Dittemore said, after the Columbia broke up. What has changed is still not known. And as this issue goes to press, the cause of the disaster remains a mystery. Which makes one of the big unanswered questions: If the best scientists in the world can't link independent variable x (the cause of a problem) to dependent variable y (the safety of a vehicle) from available data, shouldn't more analysis be performed on the vehicle itself? This can start with visual inspection. On this flight, the crew was not fully trained in space walking. NASA had determined that asking a crew in space to replace tile might end up causing more damage than it fixed. So crews aren't sent up with the knowledge, preparation or repair kits to address this eventuality. Of course, there is an international space station orbiting the earth. One of its purposes should be to act as a mechanic shop in the sky. But this space shuttle was not built with the proper means of attaching to the station, NASA says. So this was not an option. In this instance, NASA wanted the shuttle crew to take pictures of the tank, to understand exactly where the foam was shed. Then, when they got the hand-held film back, upon return to ground, the pictures would be analyzed for a flight readiness review. Why, pray tell, are we still waiting for hand-delivery of film? And later analysis? The means should be on board for such images of any potential point of vulnerability to be taken and sent back immediately. We also live in an era of sensors that can grab all sorts of information and send it back in a constant stream. Shuttles already use lots of them. Yet the real event that raised concern was when the sensors went off, like the cutting of wire on a telephone. A day after the Columbia was lost, NASA acknowledged temperatures on the shuttle's left fuselage escalated 60 degrees six minutes before the vehicle broke up.
In this day and age of multi-gigahertz processors and cheap-as-dirt detectors of all types, we should be able to feed huge amounts of data into an intelligent system that generates not just alerts for humans on the ground and in the sky, but answers and potential prescriptions that help crews prevent conditions from getting worse. This event suggests our early-warning and data-collection systems are not early or accurate enough, and should be part of a careful re-examination of how we conduct flights in space.
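As one way to picture the kind of on-board early warning the piece is calling for, here is a minimal rolling-baseline alert sketch; the readings, window size and threshold are invented for illustration, not flight data.

```python
from collections import deque
from statistics import mean, stdev

# Flag a reading that sits well above its recent baseline. The sensor values,
# the window length and the sigma threshold are all made-up examples.
def alerts(readings, window=10, sigmas=4.0):
    recent = deque(maxlen=window)
    for t, value in enumerate(readings):
        if len(recent) == recent.maxlen:
            base, spread = mean(recent), stdev(recent)
            if spread > 0 and (value - base) / spread > sigmas:
                yield t, value, base
        recent.append(value)

# Simulated temperature channel: flat baseline, then a sharp escalation.
temps = [70 + i % 3 for i in range(40)] + [75, 85, 100, 130]
for t, value, base in alerts(temps):
    print(f"t={t}: {value}F is well above the recent baseline of {base:.1f}F")
```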
<urn:uuid:1b1c324a-a7a3-4ce6-a748-623333038202>
CC-MAIN-2017-04
http://www.baselinemag.com/c/a/Projects-Data-Analysis/Changing-the-Fate-of-Those-In-Space
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00460-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9616
853
2.5625
3
IBM Delivers Free Math App for iPad

Celebrating math, IBM has produced a new math app for the iPad called Minds of Modern Mathematics. The new app re-imagines a classic 50-foot infographic on the history of math. The husband-and-wife design team of Charles and Ray Eames created the infographic, which was displayed at the 1964 World's Fair in New York City. The app, which can be downloaded from the iPad App Store, is an interactive experience for students, teachers and tech fans that illustrates how mathematics has advanced art, science, music and architecture. It reinvents the massive timeline on the history of math from 1000 AD to 1960 that was part of "Mathematica: A World of Numbers...and Beyond," IBM's historic World's Fair exhibit. Users can click through more than 500 biographies, milestones and images of artifacts culled from the Mathematica exhibit as well as a high-resolution image of the original timeline poster. The app also includes the "IBM Mathematics Peep Show," a series of playful, two-minute animated films by Charles and Ray Eames that offer lessons on mathematical concepts, from exponents to the way ancient Greeks measured the earth. The app, developed by IBM together with the Eames Office (which works to preserve and extend the legacy of Charles and Ray Eames), is debuting during the centennial year of Ray Eames' birth. Mathematics remains essential to IBM's technological innovation. As demand grows for real-time analysis of information gathered from sensors in roads and power grids and other sources of big data, IBM mathematicians are working on everything from the "Jeopardy"-winning computer Watson, to astrophysics, weather forecasting and genomics, to using analytics to ease traffic congestion and power consumption in cities around the world. IBM claims that it maintains the largest mathematics department in industry. The company expects the app to be used in classroom settings and beyond to spur interest in education and careers around STEM (science, technology, engineering and math). "Careers of the future will rely heavily on creativity, critical thinking, problem solving and collaboration, all themes that were core to the Minds of Modern Mathematics movement and remain equally relevant today," Chid Apte, IBM director of analytics research and mathematical sciences, said in a statement. "What better way than a mobile app to reintroduce this timeless classic to inspire a new generation of learners?" Eames Demetrios, director of the Eames Office, said of his grandparents, "We've taken Charles and Ray's original two-dimensional design and turned it into a compelling, interactive experience that anyone can enjoy. The original content, which has been transformed for the iPad, is still as hypnotic and engrossing as it was 50 years ago." From the earliest days of their collaboration, IBM and Charles and Ray Eames shared a commitment to popularize math and science and make them accessible for all. In 1961, IBM sponsored an exhibition at the California Museum of Science and Industry in Los Angeles, commissioning the Eameses to develop an interactive installation on mathematics. Called "Mathematica: A World of Numbers...and Beyond," the 3,000-square-foot installation inspired a generation to embrace science, math and technology. The popularity of the exhibit culminated in a replica being exhibited at the 1964 New York World's Fair.
The large-scale mathematics timeline is still on display at the New York Hall of Science in Flushing, N.Y., and The Museum of Science in Boston, and stands as an example of IBM's and the Eameses' vision for interactive learning and design. A smaller, poster-sized version of the timeline is still displayed in classrooms and museums around the world.
<urn:uuid:7be1ca02-8168-4f35-a9eb-4902b0a84922>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Enterprise-Applications/IBM-Delivers-Free-Math-App-for-iPad-178572
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00368-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94564
766
2.734375
3
The University of California, Santa Barbara has just announced the creation of a new research center that will focus on developing the technology behind a new generation of Ethernet — and perhaps a whole new era for computation in the cloud. Not only will the team seek to provide 1 Terabit Ethernet over optical fiber by 2014, it will do so with the target of extreme efficiency in mind. The Terabit Optical Ethernet Center (TOEC), which has found willing industry partners in Google, Intel, Verizon and others, will, according to the release, "build on UCSB's expertise in materials, advanced electronics, photonic integrated circuit technology, silicon photonics and high-speed integrated optical and electronic circuits" in order to bring this vision to life. According to Daniel Blumenthal, professor of Electrical and Computer Engineering at UCSB and director of TOEC, the goal "is to make energy-saving technologies that will allow applications and the underlying networks to continue to scale as needed. You could think of it as greening future networks, and the systems that rely on those networks." Blumenthal admits that to reach the lofty goal, there are multidisciplinary efforts that must come together to meet the group's aims, including the development of breakthrough technologies that go far beyond general networking and engineering. This is part of the reason why UCSB has partnered with a range of industry leaders, including Intel, who will be working to develop new strategies involving silicon photonics to create the energy-efficient devices.
<urn:uuid:ef2ca628-08a4-4a0e-9559-181210687972>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/10/21/uc_santa_barbara_aims_to_deliver_1_terabit_ethernet_by_2015/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00304-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947386
308
2.890625
3
Engagement is an important part of learning. Guidance, inspiration, and general interest lead to successful learning outcomes. It is crucial that district administrators and teachers create an interactive and collaborative learning environment for students to become engaged and involved in the curriculum. Below are some recommendations for how to use technologies to enhance learning in the digital classroom and create a successful and engaging learning environment.
- Use interactive learning tools that engage students by leveraging the rich Apple ecosystem of education apps, as well as the iBookstore for digital textbooks.
- Provide guidance in the classroom by leveraging technologies like Apple's Guided Access, so that teachers can focus student iPads on specific learning apps and websites.
- Create a personalized learning experience by providing self-service technologies for students, using important attributes tailored towards student performance, learning style, and demographics to get the content to them that they will engage with and benefit from the most. Also, leverage iBeacons to extend these capabilities and enable distribution of apps and content to students based on proximity to buildings, classrooms, or libraries.
- Create a transformative learning experience for students by leveraging the power of technologies like iPad to engage students' senses and provide adaptive learning programs, tactile learning, and interconnected education and social learning tools.
Learn more about ways the digital classroom is changing the way teachers are teaching and students are learning.
<urn:uuid:4bb5a98a-e228-4be2-a965-18377050b3d2>
CC-MAIN-2017-04
https://www.jamf.com/blog/four-ways-to-engage-students-in-the-digital-classroom/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00028-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930461
272
3.890625
4
For a long time only "structured" data could be analyzed in databases. But what about data that doesn't come in convenient rows and columns, like the Human Genome data? For a long time, it was only so-called "structured" data that could be analyzed in databases. This kind of data comes in rows and columns (you know, like you find in spreadsheets) and has a predictable format. Most accounting data is like this, and the science of business over the last 30 years has been revolutionized by using high-speed, high-volume SQL databases to get insight from that structured data. However, what about other sciences? What about data that doesn't come in convenient rows and columns? I've been looking recently at Human Genome data. This definitely doesn't come into the category of "structured" data, although there is some element of structure to it. For example, human DNA can be expressed as a long sequence of 4 different letters (A, T, C and G). These letters are read in groups of three, and of the 64 possible combinations of these letters, 61 refer to making specific amino acids and the other three are "stop" signals that mark the end of the larger groupings of these triplets called genes. The human genome can be represented by around 3 billion letters, which would equate to around 800 MB of data (unzipped), although the actual size of a file containing a human genome is much larger due to the way that sequencing is done and because of data quality issues. The science of physically "sequencing" these 3 billion letters from a sample of DNA is now well established. Unfortunately, the science of analyzing that data is rather less well understood. The good news is that anyone can acquire this data. The "1000 genomes project" data, for example, has been available in the AWS cloud for some time and consists of >200 terabytes for just 1700 participants. You can imagine the data volumes associated with more contemporary projects, such as the Million Human Genomes project. But while you can get your hands on genome data, how do you go about analyzing it? The problem is that there's a lot of it, and it's very difficult to interpret. Well, you certainly wouldn't have to start from scratch; geneticists have written a number of libraries to calculate various common metrics of interest. For example, the "Pybedtools" library for Python allows you to identify genes that show a given genetic variation. You could become a Python developer and write a few million lines of code on a big server to make use of this library. Alternatively, you could use EXASOL's in-memory analytic database (in the cloud or on your own servers) and import these genomic libraries so that you could build User Defined Functions around them. The upshot of this second approach is that you can run database queries that are "in-memory" and parallel and are therefore extraordinarily fast. You also have the benefit of being able to blend this "unstructured" genetic data with, for example, structured patient data and use the SQL language and mainstream business intelligence tools (such as Tableau) to give you great visualizations of the data without requiring lots of computer code. More and more, we are talking to organizations with data requirements that extend well beyond traditional accounting data. Genetics is a growing area of interest, but our system is designed, through the use of our User Defined Function framework, to support any kind of data at all. Why not have a look for yourself? You'll be surprised at the kinds of analytics you can do with EXASOL.
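As a toy illustration of the kind of simple per-sequence metric such libraries compute (this is not EXASOL's UDF framework or Pybedtools, and the sample string is made up), splitting a sequence into codons and counting them takes only a few lines:

```python
from collections import Counter

# Illustrative only: a made-up DNA string, read in non-overlapping triplets.
dna = "ATGGCGTGAACCTTGGCATGATAA"

def codons(seq):
    """Split the sequence into non-overlapping groups of three letters."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

codon_counts = Counter(codons(dna))
gc_content = (dna.count("G") + dna.count("C")) / len(dna)

stop_codons = {"TAA", "TAG", "TGA"}   # the three "stop" triplets
stops = sum(codon_counts[c] for c in stop_codons)

print(codon_counts.most_common(3), f"GC={gc_content:.2f}", f"stops={stops}")
```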
<urn:uuid:e2038174-62e2-48eb-9c83-61df557d6015>
CC-MAIN-2017-04
http://www.exasol.com/en/blog/2015-08-12-gene-genie-genomic-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00514-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941888
775
2.953125
3
Cryptography researchers at MIT and Harvard have developed software called Sieve that is designed to help users keep track of encrypted personal data and better manage it in the cloud. The Web infrastructure concepts behind Sieve could have significant implications for government searches of data, such as in the Apple-FBI case, or for companies using personal data from fitness bands and other devices for marketing and other purposes. With Sieve, a Web user on a smartphone, smartwatch or other device could store personal data in encrypted form in the cloud, according to an MIT statement on Friday. Then, when any app wants to use specific data items, like a name or address, it would send a request to the user, and, if granted, would receive a secret key to decrypt only those items kept in the cloud account of the user. In addition, if the user wanted to revoke the app's access, Sieve would re-encrypt the data with a new key. The idea for Sieve first came more than a year ago to Frank Wang, a Ph.D candidate in computer science at MIT. Wang was using his Fitbit and was concerned about where his fitness data was stored and how it would be accessed by him or by others, he said in an interview. "I don't want people to hack my data and get more than I want," he said. "With Sieve, we want users to securely store and selectively access that data for Web services and Web apps. We want the data to remain secure and give subsets to Web services. In theory that's easy, but in practice, it's difficult," Wang said. "With Sieve, the user has more control over how his or her data flows to different parties." Wang spoke by phone just prior to giving a talk on Friday about Sieve at the Usenix Symposium on Networked Systems Design and Implementation on Santa Clara, Calif. Wang, 26, has worked to develop Sieve with MIT associate professors of electrical engineering and computer science Nickolai Zeldovich and Vinod Vaikuntanathan as well as James Mickens, associate professor of computer science at Harvard University. Apps used on everything from smart thermostats to smartphones "collect a lot of user data, and you don't know what the [app developer] will do with it," Wang said. "Our goal is to say it's the users' data, and they should say how it's used." He gave one practical example of how Sieve would work. If a sleep monitor has sleep data that is better than what a fitness band could provide, a user could permit the sleep data to be ported to the fitness band, which might give better tips on fitness than the sleep monitor would provide. "It makes it very easy with all the data in one location," Wang said. "Part of my motivation for Sieve was that fitness data may need to be regulated, since how different, really, is fitness data from medical records? " Wang said. "People can guess a lot about my health with a small amount of data." Concerns about uses of fitness data and other seemingly innocuous information have come to the attention of the Federal Trade Commission and other regulators. During an appearance at CES in January, FTC Chairwoman Edith Ramirez said that devices are "gathering increasingly sensitive information about us and how it is used or shared, and the potential for unintended uses is a concern." Ramirez said she was so personally concerned about sharing her own fitness data that she uses an older, unconnected pedometer to measure her steps. "I don't want to share," she said. Sieve could also better protect a person's data from a court-ordered warrant. 
If the FBI brought a search warrant to Facebook or Amazon for a person's data, the companies would be able to say that they don't have any of the user's important data. "If somebody told Amazon, give me all of Frank's data, Amazon can say, 'Ask Frank,' " Wang said. Wang is well aware of the FBI-Apple dispute in federal court over gaining access to a secure iPhone used in a terror attack. "Maybe Sieve would raise the hackles of the intelligence community, I don't know," Wang said. But Sieve could be a means to simplify things for users, he said. In another example, he said a user signing up with a new insurance company could give the insurer a specific key to access a subset of the user's personal data in the cloud. After the access was finished, the key would be changed to prevent further access. While part of the idea for Sieve came out of Wang's concerns over his personal data on Fitbit, it also came from the latest direction of study in the computer science field. "A lot of people in computer science are excited by users managing their own data, instead of Web services doing it," Wang said. "There's a lot of user distrust about using Web services and the cloud and finding some way to interact in a secure way," he said. "People are concerned about privacy and many don't know that Facebook and Fitbit have a lot of data on us." Wang received his undergraduate degree in computer science at Stanford University. He envisions three components for Sieve: software that a user installs on a device, software installed on apps and software installed in the cloud. "It would be great if Sieve was a product, but it's more of a model of a new Web infrastructure," he said. Meeting with tech companies and app developers will help determine the path forward for Sieve. "All of this is about making data access seamless for users," he said. "I hate the way we get data from Web services." This story, "MIT, Harvard researchers push new way for users to control access to personal data" was originally published by Computerworld.
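A toy sketch of the idea, using per-field symmetric keys with Python's cryptography library: each field is encrypted under its own key, an app is granted only the keys for the fields it may read, and revocation re-encrypts those fields under fresh keys. This only illustrates the concept; the actual Sieve system relies on more advanced cryptography (attribute-based encryption) rather than this naive scheme, and the profile fields below are invented.

```python
from cryptography.fernet import Fernet

# Made-up user profile; in Sieve's model this would live encrypted in the cloud.
profile = {"name": "Frank", "address": "77 Mass Ave", "sleep_hours": "7.5"}

keys = {field: Fernet.generate_key() for field in profile}
cloud_store = {field: Fernet(keys[field]).encrypt(value.encode())
               for field, value in profile.items()}

def grant(fields):
    """Hand an app the decryption keys for just the requested fields."""
    return {f: keys[f] for f in fields}

def app_read(granted_keys):
    """What an app can recover from the encrypted cloud copy."""
    return {f: Fernet(k).decrypt(cloud_store[f]).decode()
            for f, k in granted_keys.items()}

def revoke(fields):
    """Rotate keys and re-encrypt, so previously granted keys stop working."""
    for f in fields:
        plaintext = Fernet(keys[f]).decrypt(cloud_store[f])
        keys[f] = Fernet.generate_key()
        cloud_store[f] = Fernet(keys[f]).encrypt(plaintext)

fitness_app = grant(["sleep_hours"])
print(app_read(fitness_app))      # {'sleep_hours': '7.5'}
revoke(["sleep_hours"])           # the old key no longer decrypts the new copy
```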
<urn:uuid:56cf3b6c-6c47-4b69-aaae-3a28b2251830>
CC-MAIN-2017-04
http://www.itnews.com/article/3045974/encryption/mit-harvard-researchers-push-new-way-for-users-to-control-access-to-personal-data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00450-ip-10-171-10-70.ec2.internal.warc.gz
en
0.974734
1,208
2.671875
3
The rapid growth in analytics has been accompanied by an equally rapid growth in confusion about what analytics is, and what all of the terms used to describe it mean. I've spent 32 years working in analytics, and I still find some of the terminology confusing. I can, therefore, only imagine how confusing it can be to someone who just launched analytics into their business. So with that in mind, I'd like to try to clarify some of the key points of analytics confusion that I hear about most often when I meet with business people, from the C-level on down, and even from academics around the globe.

Defining analytics, simply

As a starter, I'd like to define analytics and the different types of analytics. What is analytics? Analytics is the application of mathematics and sometimes visualization to gain understanding and insights from data. And now for the different types of analytics: descriptive analytics, predictive analytics and advanced analytics. What are they and what do they really mean? The easiest term to understand is descriptive analytics. This is the simple stuff. It involves taking data and calculating numbers that quickly describe the data. As an example, think way back to elementary or middle school when we learned to take the average (AKA the mean) or find the median of numbers, such as finding the median of home prices, and so on. This is a form of descriptive analytics. Range is another type of descriptive statistic you may be familiar with. Another way of describing data is by looking at the trends it expresses. This is not Twitter trends, which last only a few minutes, but trends like: what happened to wages over the past two years? Or the average cost of gasoline over the past five years -- are the prices going up or down? What do they look like over time? This is the easy analytics. Now let's look at some of the more confusing terms: predictive analytics and advanced analytics. The term predictive itself can be a source of confusion. That's because people, understandably, think predictive means forecasting something. While forecasting is an application of predictive analytics, predictive analytics is used in many more ways that don't have anything to do with forecasting. In an attempt to avoid this confusion, I prefer using the term advanced analytics. It connotes that the math and statistics are used to go beyond simply describing: it is more advanced and enables us to understand relationships between data. Advanced analytics can tell you, for example, when winter temperatures are 10 degrees Fahrenheit lower in North America, you will sell 12% more snow-blowers, shovels and so on, than you normally do. Or it can tell you that when people contact a call center to complain about their service, there is a 5% increase in probability that they will switch to another provider. Advanced analytics can also tell you that the way you drive your car or truck has shortened the useful life of its oil and you should have your vehicle serviced three weeks earlier than planned.

When is a machine not a machine?

Another common term I see causing confusion is machine learning. It's confusing because machine learning actually has nothing to do with machines. Rather, machine learning is a mathematical or statistical approach that removes the human element from the modeling process. There are specialized mathematical and statistical techniques that can provide useful insights without the need for a statistician or mathematician to actively build the models.
These models are able to take in data and produce results or forecasts without human involvement. They are also able to ‘learn’ over time, again without human assistance, so that as new data becomes available, they can adjust to it and generate results accordingly. For instance, the mathematical algorithms that generate recommendations for you when you are on a web site usually employ machine learning techniques, as they have to provide changing recommendations in real time. As our world becomes more connected and machine-to-machine interactions proliferate in the Internet of things, the machine learning analytical approaches, that don’t require human intervention, will also grow in importance. I’ve covered just a few of the most common terms that I see confusing businesses when they start to engage with the world of analytics. If you have some favorites that cause confusion, or you have some that you’d like someone to clarify, feel free to send them my way and I’ll shed some light on what can be confusing vocabulary.
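To make the distinction concrete, here is a small sketch; the sales and temperature figures are invented, and the hand-fitted straight line stands in for the much richer models real advanced analytics would use.

```python
import statistics

# Invented numbers purely to contrast the two levels of analytics described
# above; a real model would use far more data and a proper fitting library.
snowblower_sales = [120, 135, 150, 160, 180, 210]
winter_temps_f = [30, 28, 25, 22, 18, 12]   # colder winters, higher sales

# Descriptive analytics: summarise what the data looks like.
print("mean sales:", statistics.mean(snowblower_sales))
print("median sales:", statistics.median(snowblower_sales))

# Advanced/predictive analytics: model the relationship between the two
# series (a least-squares line fitted by hand) and predict an unseen case.
n = len(winter_temps_f)
mx = sum(winter_temps_f) / n
my = sum(snowblower_sales) / n
num = sum((x - mx) * (y - my) for x, y in zip(winter_temps_f, snowblower_sales))
den = sum((x - mx) ** 2 for x in winter_temps_f)
slope = num / den
intercept = my - slope * mx

print("predicted sales at 10 degrees F:", round(slope * 10 + intercept))
```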
<urn:uuid:3b1cff79-21ee-4a7e-9c51-199654eab455>
CC-MAIN-2017-04
http://www.computerworld.com/article/2475445/business-intelligence/clarifying-analytics-terminology--removing-the-confusion-to-help-businesses.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00358-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951039
897
2.515625
3
Live monthly webcasts for Inventors, Technologists, and Startups. Simple description of patents: how to turn an invention into a patent, how they protect your rights, why you need them, and how much they cost. Talk presented by Thad Gabara, who holds over seventy patents and is licensed to prosecute patent applications before the United States Patent and Trademark Office. Teaching, Suggestion and Motivation (TSM) occurred due to the Supreme Court case of Graham v. John Deere Co., 383 U.S. 1 (1966). A second Supreme Court case, KSR, concerns the issue of obviousness as applied to patent claims. An examiner can reject a claim based on the common sense of a person having ordinary skill in the art (PHOSITA). Common sense is a perception, and Voltaire has stated "Common sense is not so common." If you are interested in obtaining a patent, this talk is a must-see. Claims are typically partitioned into apparatus, method and "means for" type language. Some of the mystery of reading claims is uncovered. Terms covered include: antecedent basis, negative limitation, etc. Time permitting, a patent case may be presented. The basic fundamentals of patents are presented. The history, rights and types of patents are addressed. The required components of a patent application and topics that cannot be patented are detailed. Patentability issues based on basic law and a description of 35 U.S.C. §101, §102, §103 and §112 are covered.
<urn:uuid:84e5b7bf-7979-4f66-934f-39c39645db57>
CC-MAIN-2017-04
https://www.brighttalk.com/channel/187/intellectual-property-patents
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00084-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935728
319
2.796875
3
IPv6 provides an address for each node on the Internet, so NAT is no longer needed as a workaround for address depletion. NAT also provides some basic security. But NAT, like any stateful device, can also be the target of DoS attacks. And any NAT device within a network provides an easily exploited opportunity for undetected Man-in-the-Middle (MITM) attacks. Only end-to-end security offers protection from exploits related to ARP or ND+MLD, and end-to-end security is likely only possible with IPv6. This does not need to be IPsec. An RFC was written to explain how to get the security benefits provided by NAT without NAT: RFC 4864, "Local Network Protection for IPv6."

Basically, NAT is not a security feature, but it provides some basic security. Why? One reason is that PAT or NAPT is stateful, and people think that it therefore provides security. NPTv6, which is not stateful, will not provide this security; on the other hand, NPTv6 provides address independence but still breaks some applications.
- In IPv4 with NAT, when you don't want an internal server to be visible from the outside, you just don't configure a static translation with a public address.
- In IPv6 without NAT, when you don't want an internal server to be visible from the outside, you don't configure a Global Unique Address on this host but instead give it a Unique Local Address (ULA). ULAs are not routed to the outside on the Internet, and you get exactly the same behavior.
- In IPv4 with NAT, when you want an internal server to be reachable from the outside, you must provision a public address for this host and a static NAT translation for this host.
- In IPv6 without NAT, when you want an internal server to be reachable from the outside, you must provision a Global Unique Address and that's it! No need for a static translation!
There is no additional risk in IPv6 with a Global Unique Address compared to IPv4 with a static NAT translation.

Because we still need security for IPv6, we still need to implement IPv6 firewalls. If we use a router- or hardware-based stateful firewall, we may block incoming traffic not initiated from the inside, and then we lose end-to-end connectivity. A solution could be for these firewalls to allow incoming traffic while performing traffic inspection such as DPI, IDS, Mail Guard or any feature that inspects the traffic on the fly, so we may still be able to block any incoming attack before it has a chance to get into the network. This would be complemented by enabling the IPv6 firewall feature which is provided in any Windows, Mac OS X or Linux/Unix OS. Another good document to study when you want to implement an IPv6 firewall is the NSA's "Firewall Design Considerations for IPv6."

But recently, the IETF has provided useful recommendations for IPv6 firewalls in RFC 6092, "Simple Security in IPv6 Gateway CPE." Basically, this recommendation provides the best practices and filtering rules to prevent spoofing and to block packets with Martian addresses or a multicast address in the source. RFC 6092 also recommends implementing stateful firewalls which do not allow incoming traffic not initiated from the inside, with the exception of IPsec traffic. So by default, incoming IPsec would be enabled. This is good enough to allow end-to-end connectivity. And RFC 6092 does not say that all traffic other than IPsec must be blocked; it is still possible to allow some important applications if needed for peer-to-peer connectivity.
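As one way to picture what an RFC 6092-style default policy can look like in practice, the sketch below prints an ip6tables rule set along those lines (ip6tables being one of the options named later in this post). The interface name, the exact rule set and the choice of which IPsec and ICMPv6 traffic to admit are illustrative assumptions, not a complete or audited configuration.

```python
# Illustrative only: emit a minimal RFC 6092-style forwarding policy as
# ip6tables commands. Review and adapt before applying on a real gateway.
WAN = "eth0"  # hypothetical upstream interface

rules = [
    "ip6tables -P FORWARD DROP",  # default-deny policy for forwarded traffic
    # stateful behaviour: allow replies to connections initiated inside
    f"ip6tables -A FORWARD -i {WAN} -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT",
    # anything leaving the inside network is allowed
    f"ip6tables -A FORWARD ! -i {WAN} -j ACCEPT",
    # RFC 6092 carve-out: admit IPsec (ESP, plus IKE on UDP/500) by default
    f"ip6tables -A FORWARD -i {WAN} -p esp -j ACCEPT",
    f"ip6tables -A FORWARD -i {WAN} -p udp --dport 500 -j ACCEPT",
    # ICMPv6 is needed for neighbour discovery and path MTU discovery
    "ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT",
]

for rule in rules:
    print(rule)
```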
Also, by providing a unique address to each node, IPv6 will restore end-to-end connectivity, though in practice it is really end-to-end "addressability," as no one would accept end-to-end connectivity for any traffic, at all times, between any nodes!

Now, which router-based or hardware-based firewalls should you use? The choice is getting larger and larger:
- There is a basic Cisco IOS Firewall for IPv6
- Cisco IOS has an interesting zone-based firewall
- The Cisco PIX has been replaced by the ASA
- FortiGate from Fortinet also supports IPv6
- ip6tables on Linux

Fred Bovy, CCIE #3013
Fast Lane's Resident IPv6 Expert
<urn:uuid:213d8376-c975-435c-b673-e8166900c7bf>
CC-MAIN-2017-04
http://www.fastlaneus.com/blog/2011/09/26/recommendation-for-ipv6-stateful-firewalls/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00570-ip-10-171-10-70.ec2.internal.warc.gz
en
0.90151
998
2.78125
3
Archaeologists have used the term defense-in-depth for decades to describe the obstacles erected to thwart attacks. At Dun Aengus, the most spectacular layer of defense was a band of chevaux-de-frise. Viking marauders running up a hill to take a fort would have to survive a series of defenses, arrayed in sequence: berms, ditches, outer walls, chevaux-de-frise, more ditches, walls, palisades (tall, spiky wooden fences) and more walls. Infosecurity professionals practice some defense-in-depth, but a key lesson from Dun Aengus is the variety of defenses. Today, several firewalls might equal several layers of security, but that's only one kind of defense repeated. Bronze Age architects made sure different tools and skills would be required at every stop to slow down an attack and therefore improve the ability to counterattack.
<urn:uuid:6d8b8d15-606e-44d8-ab9b-baa6232cdba3>
CC-MAIN-2017-04
http://www.csoonline.com/article/2118779/network-security/bronze-age-lessons--practice-defense-in-depth.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00294-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947881
195
3.609375
4
NPR – 11/2/16

The Internet can be a dangerous place. Hackers, bots and viruses are prowling the Web trying to turn your machines into zombies. Last month, a massive network of hacked devices helped temporarily shut down Twitter and other websites. Hackers used a virus called Mirai to target Dyn, a major Internet infrastructure company, in a sophisticated denial-of-service attack — when insecure Internet-connected devices are directed to barrage a target with data until it shuts down. Andrew McGill, a reporter at The Atlantic, devised an experiment to find out how vulnerable our devices are to hackers. He built a virtual Internet-connected toaster, put it online and waited to see how long it would take for hackers to attempt to breach it. They found him much faster than he expected.
<urn:uuid:5645cd54-2427-4cea-b09f-0fcec04ddb6d>
CC-MAIN-2017-04
https://www.ca.com/us/company/newsroom/news-articles/an-experiment-shows-how-quickly-the-internet-of-things-can-be-hacked.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00202-ip-10-171-10-70.ec2.internal.warc.gz
en
0.903266
164
2.546875
3
The year is 2033. A bleary-eyed woman heaves her suitcase from a shiny baggage carousel in Los Angeles International Airport. Rather than hailing a cab, she steps into a white, streamlined pod attached to an overhead rail. The woman swipes her credit card and uses a touchscreen to indicate her destination, a pod stop near her home in Beverly Hills. The door closes, and her pod accelerates to match the speed of traffic as it merges onto the main guideway at 114 mph. The woman finishes her salad and plays Sudoku on her smartphone as the pod glides along silently. She’ll be home soon. This is the kind of transportation experience being promised by personal rapid transit (PRT) or automated transit network companies. Traffic on PRT networks is controlled by a central computer, so collisions, traffic and even the stress of possibly getting lost could all theoretically be eliminated. PRTs can be powered by clean energy, and some proposed systems use magnetic levitation to connect vehicles to a high-speed guideway, eliminating pollution, noise and the usual wear caused by moving parts. A PRT network that spanned a large metropolitan region could solve most problems of existing public and private transportation. Everything about PRT smacks of the future, but the technology’s here today. Or at least it exists — although the networks haven’t been built. There are only a few small PRT networks in operation worldwide, and there are several more under way, but none of them embody the potential of what proponents of PRT say the technology can offer. The oldest PRT network began operation in 1975 in Morgantown, W.Va., but a bystander could easily mistake the system for something other than futuristic. The Morgantown PRT network is neither personal nor rapid — each car holds about 20 passengers, the system runs on a ground-mounted rail, and the top speed is 30 mph. To be fair, technology has come a long way in the past 40 years, and besides, many today still consider the Morgantown PRT network a great demonstration of the reliability, safety and improved service that PRT networks could offer. There are two other PRT networks in operation — one designed by 2getthere in Masdar City, a planned city in the United Arab Emirates, which would be the first zero-carbon, zero-waste, car-free city in the world. The other was completed by ULTra for London’s Heathrow Airport in 2011. And while both systems have been lauded as excellent demonstrations of PRT’s viability, neither system realizes the technology’s full potential. These networks have only a few stations, cover only a few miles, and have only 13 and 21 cars, respectively. Likewise, a PRT system being built now in Suncheon, South Korea, is of similar size and not intended to displace local taxi or bus services, but rather serve as a way to transport guests to the city’s upcoming garden expo. Existing PRT networks have very specialized functions, said William Millar, former president of the American Public Transportation Association. For the technology to progress beyond airports and pilot cities, he said, something must change. Factors like rising fuel prices, growing populations and increased public concern about global warming all are forcing transportation methods toward greater efficiency, Millar said. But while PRT offers a solution for these problems, PRT’s track record makes Millar doubt that it will become a prevalent technology in the future. “Personal rapid transit is an idea whose time heretofore has never come. 
It has been tried in many different forms, in many different ways, and it has not yet found much of a market other than specialized circumstances.” Perhaps the biggest hurdle for PRT is a social one. “Often these are services that are elevated in the air in urban areas, and people don’t want that,” Millar said. “It won’t be the technology that holds this back. It will be the institutional design concerns that people have.” Communities often view such proposals as eyesores and raise concerns about noise and other problems. Even if the system is actually small and silent, as is the case in many of the new systems being developed, it won’t matter, he said — overhead transit systems like elevated trains have conditioned the public to oppose PRT, even if their reality is totally different than what the public thinks it is. The case still must be made that PRT is superior to existing transportation options, Millar said. “What you’re willing to do on your vacation in Disney World,” he said, “is not necessarily what you’re willing to do when you’re back home in the work-a-day world.” Others aren’t so pessimistic. Several companies are striving to replace the car, bus and train with citywide networks of fast-moving pods. SkyTran, based in Mountain View, Calif., at the NASA Ames Research Center, is one such firm. The company is starting locally, petitioning the Federal Transit Administration (FTA) for approval to build a PRT system in Mountain View. But SkyTran also is aiming its high-speed system at large cities worldwide. The firm recently contracted with Tel Aviv, Israel, to build a line that could eventually be expanded throughout the city if the pilot is deemed successful. Chris Perkins, vice president of government affairs for SkyTran, said the benefits of PRT are compelling enough to sway public sentiment and revolutionize transportation. “It doesn’t have wheels [or] a diesel motor. It doesn’t belch fumes,” Perkins said. “It’s essentially an all-electric, linear motor system that uses magnetic levitation so there’s no wheel contact. There’s no wheel noise as the vehicle is traversing our guideways.” What SkyTran’s PRT offers, he said, is “point-to-point, nonstop, on-demand service, which is just like a car,” except better. “It’s the kind of car we all want,” Perkins said. “You get in the thing, you go where you want to go and then it just disappears. No possible parking tickets to pay, no insurance.” PRT networks like the ones SkyTran is proposing could replace trains, buses and even personal vehicles, to an extent. A single guideway is equivalent to three lanes of traffic, Perkins said, yielding potentially 11,000 passengers per hour. The cost to the consumer is about half that of owning a car, he said, and would cost a city about $9 million per mile to build, which is much cheaper than the cost of most traditional transportation infrastructure. The low cost and small footprint of PRTs are two reasons the technology is starting to replace automated people movers in airports, as it did in London’s Heathrow, and why a city like Tel Aviv will try it. Perkins agreed with Millar that concerns about how PRT will look and sound are the most common worries from prospective clients. But he said PRT systems won’t necessarily mar the nation’s cityscapes. In fact, these systems could help clean up urban visual blight by integrating power lines and light and traffic poles into the guideway structure. SkyTran’s design is modular, so it could be mass produced easily, he added. 
“It can be built in factories, then shipped to the job site and assembled like Tinkertoys.” SkyTran’s PRT relies on magnetic levitation, but in 2009, the FTA released a report citing key challenges around the use of magnetic levitation in urban areas. Obtaining right of way, meeting safety standards and traveling at speeds lower than what’s normally used for magnetic levitation were all key concerns cited by the report. However, the federal government has spent more than $250 million researching magnetic levitation, including a four-year research grant completed by SkyTran in 2010, and they’ve come a long way in providing solutions to all of those challenges, Perkins said. But so far, SkyTran’s projects have been relegated to the space already staked out by PRT in the past. The company’s vision of fast-moving citywide networks that can replace lanes of freeway traffic and branch out to every neighborhood is still out of reach. The Colorado Department of Transportation is evaluating whether SkyTran should build a guideway between Denver International Airport and nearby ski resorts. That system, if built, could possibly someday extend into a larger system used for more general transport, Perkins said. But for now, governments don’t see PRT as a main mode of transportation that will replace cars or buses anytime soon. Last year, San Jose, Calif., conducted studies examining the possible use of a PRT network for transporting people both within San Jose International Airport and connecting them with nearby train and light rail systems. The reports found that PRT has potential, beating out shuttle buses and automated people movers in terms of cost and service provided to the user, but the studies also recommended that San Jose tread carefully. PRT shows promise, said Laura Stuchinsky, sustainability officer for the San Jose Department of Transportation, but many unproven components must be worked out before the city invests. PRT proponents often point to existing systems in Heathrow Airport or Morgantown as proof that the technology works, and to some degree, those statements are fair. The systems have proven safe and reliable, but those small systems don’t eliminate concerns that PRT may not scale well, nor do they provide any comfort regarding the lack of a regulatory framework that governments can comfortably work within. “We think there’s great utility, but there’s not a system proven yet that can handle large crowds like what would be coming off a high-speed rail station or out of our airport during peak periods,” Stuchinsky said. “We’d love to see these systems built in the U.S.” But there isn’t yet sufficient proof that such a system would work well enough for San Jose to build a limited airport network, let alone a wider network to serve the entire city, she said. “We also don’t think automated transit networks [will] replace existing transit. We think the best initial applications, and there may be other applications, are weaving together existing systems,” Stuchinsky said. San Jose’s 2040 General Plan includes the development of urban villages, mixed residential and urban areas that would let people walk or bike to work while enjoying many of the benefits of a large urban center. PRT could connect such villages to larger transit networks, or cross barriers like rivers or freeways. PRT could deliver travelers along the final leg of their journeys, Stuchinsky said, connecting major transit hubs with common destinations. Much as NASA or the U.S. 
military hires independent agencies to test the viability of a rocket or missile, she said, governments nationwide could benefit from a federal program that demonstrated PRT as a low-risk venture for both the government and private companies planning to build these networks. Few cities want to be a guinea pig for large-scale PRT deployment though, especially if they’re picking up the tab. Peter Muller, president of PRT Consulting, agreed that the federal government should help the technology get a foothold in the U.S. Public opinion polls measuring PRT users’ satisfaction, Muller said, have shown that the technology is safer and provides better service than all other forms of public transport. The public loves PRT when it’s available — people just haven’t been given a chance to embrace it. Part of the problem, Muller said, is that the U.S. government still hasn’t gotten over its hurtful breakup with PRT in the 1970s. The Morgantown system is considered by many today to be safe and reliable, but the network didn’t start that way. While still under development, the project was altered from an initial plan of a true PRT system to a PRT-group transit hybrid. To complicate things further, the project was rushed to completion by Richard Nixon, who wanted to see it finished in time for his run for a second presidential term. On the day the system was unveiled, Tricia Nixon, the president’s daughter, rode the first vehicle until it promptly jammed in the middle of the track, stranding her. “It took them three years to get it to work,” Muller said. “They had to start all over. It overran the budget, it overran the schedule, it embarrassed Tricia Nixon. It was a disaster.” Since then, PRT hasn’t gotten a fair shake from the feds or regional transit agencies, Muller said. “The way public transportation is funded is a huge part of the reason,” he said. “They don’t encourage innovation. They don’t encourage people to fail and then learn from their failures. If you try something new and it fails, heads have to roll. This doesn’t encourage people to try something new.” Another problem is that the way the FTA evaluates transit systems tends to undervalue PRT, Muller said. “They do what are called corridor studies. Well, PRT is a network system. It’s designed to operate in a network. It can work in a corridor, but that’s not its biggest advantage.” Concerns that PRT may not be scalable are legitimate, Muller said, but the degree of skepticism and fear surrounding the technology is unrealistic. “The federal government should step up and fund a demonstration program with PRT and either prove or disprove its scalability, so cities that want to do it can feel some comfort that this is a system that’s been proven to really work,” he said. Cities like San Jose or New Jersey, which also looked at the technology in 2007, concluded the same thing — PRT has been around for decades, but the research and development required to demonstrate the viability of a large network is beyond most city and state governments. Transportation is about improving quality of life for as many people as possible, Muller said. “If the government doesn’t [help], people are going to have to carefully tiptoe into this technology, and it will take a long time for it to become ubiquitous and that’s just a shame. It can solve many of our problems.”
Last week we brought up the question, “How did end users learn to expect fast websites?” We covered how Pavlov discovered Conditioning through experiments with his dog, which left us wondering: perhaps humans are conditioned to expect instant web gratification. Almost 100 years after Pavlov, Wolfram Schultz, now at Cambridge, stuck probes into the brains of rats and began to quantify the conditioning phenomenon in terms of neuronal activation. Dopamine (DA) is a neurotransmitter in our brain commonly coupled with reward. Dopamine is released while you are eating an ice-cream sundae, when you win a prize, even during sex. Cocaine, nicotine, and other stimulants all cause increases in dopamine transmission as well. Gamblers and other addicts all experience increased DA levels right before and during their favorite activities. Schultz discovered that deep down in our midbrain regions (the areas associated with reward), there are certain patterned firings of a population of Dopamine neurons that signal both reward and reward expectation. His lab showed that once a rat has learned the association between a stimulus and reward – the activation of its midbrain dopamine neurons follows the same kind of conditioned behavior as well. This graph shows normal baseline activity of rat Dopamine neurons. At some point, R, a reward (sugar water) is given to the rat. The probe in the rat’s brain detects a spike of activity immediately following the reward. After conditioning the rat to learn that a flashing light signals sugar-water, the burst of activation shifts. It moves from immediately following the sugar-water reward to immediately following the Conditioned Stimulus (CS) – the flashing light. At this point, the light itself is the perceived reward. Now, the actual reward is “guaranteed” to follow (or at least that is what the rat has learned). This is the underlying mechanism that caused Pavlov’s dog to salivate at the bell. The dopamine response to the stimulus has already triggered a downstream chain of physiological reactions to get prepared for food. * It should be noted that the same amount of time passed on every conditioning run between the CS and the reward. Here is where Performance Engineers and Ecommerce Directors should start shaking in their boots. When Schultz withheld the sugar after flashing the signal light, there was a drop in activity immediately following the exact moment the sugar-water should have been delivered. This temporally precise drop is commonly referred to as the Dopamine Reward Prediction Error. Schultz also found that delaying the reward by 500 ms, 1000 ms, and 2000 ms caused the same depression in activity (right after the reward was supposed to be delivered) followed by a new spike in activity once the reward was delivered. So the rat’s neurons can also detect slight latencies in the delivery of their reward as well. These two studies might just show that unconsciously, our brains have conditionally learned when a webpage must load and we possibly experience a burst of dopaminergic neuronal activity after pressing the GO button. Does this mean that there is a dip in activity when the website takes longer than we expect? Who knows what cascading set of events the drop in Dopamine activity triggers. Perhaps it’s web stress, perhaps it’s frustration, perhaps it’s like an addict being denied their drug fix.
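The burst-then-shift pattern and the dip on an omitted reward are commonly described with temporal-difference (TD) learning, the standard textbook model of the dopamine prediction error. The sketch below is that model, not code from Schultz's experiments; the trial layout, learning rate, and timings are illustrative assumptions.

```python
# Minimal temporal-difference (TD) sketch of the reward prediction error:
# the error signal starts at the reward, migrates toward the cue with training,
# and dips when an expected reward is withheld. All parameters are illustrative.
N_STEPS = 10                 # time steps in one trial
CS_TIME, REWARD_TIME = 3, 7  # cue (flashing light) and reward (sugar water)
ALPHA, GAMMA = 0.3, 1.0      # learning rate, discount factor

values = [0.0] * (N_STEPS + 1)   # learned value of each moment in the trial

def run_trial(reward_delivered=True):
    """Run one trial; return the TD error (the 'dopamine signal') at each step."""
    errors = []
    for t in range(N_STEPS):
        reward = 1.0 if (reward_delivered and t == REWARD_TIME) else 0.0
        delta = reward + GAMMA * values[t + 1] - values[t]   # prediction error
        if t >= CS_TIME:          # moments before the cue carry no prediction
            values[t] += ALPHA * delta
        errors.append(round(delta, 2))
    return errors

print("untrained:", run_trial())        # spike at the reward itself (t = 7)
for _ in range(300):                    # conditioning trials
    run_trial()
print("trained:  ", run_trial())        # spike now sits at the transition into the cue
print("omission: ", run_trial(False))   # dip at the moment the reward should have arrived
```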
If user expectations are determined by Conditional Learning, where did we learn that websites are supposed to load in 3 seconds or less, and why is the expectation changing? I suspect that the sites we use the most are driving this learning behavior! The Googles, Yahoos and Amazons of the world, who have the technology, infrastructure, investment, and manpower to continue to make their websites faster, do so and spend great effort leading the way. Additionally, the web has changed completely since dial-up, and webpages are now getting delivered at speeds of 1 Mbps to 100 Mbps. Another remaining question is how much impact other knowledge and outside influences have on this learning. Is querying for a keyword in a search engine a different signal than clicking “Submit Order” to process your credit card on an ecommerce site? (Most of us don’t mind that it takes a little longer.) Behind the scenes, our brain has learned how fast a page should load and can detect slow ones – even a delay as small as 250 ms. There are still a lot of open questions regarding what factors are involved in this learning behavior, how much learning from one site transfers to another, what the impact is over time, and what the repercussions of a negative experience are. We clearly need more scientific research in this area to understand our new digital lives and how we can trick the mind into overcoming these speed trap pitfalls.
A new invention uses biomimicry to make inkjet printers more efficient. An engineering professor from the University of Missouri invented a mechanism for inkjet nozzles that keeps the surface wet. “The nozzle cover we invented was inspired by the human eye,” said Jae Wan Kwon, associate professor of engineering at the University of Missouri. “The eye and an inkjet nozzle have a common problem: They must not be allowed to dry while, simultaneously, they must open.” The nozzle cover uses a droplet of silicone oil, which is moved in and out of place with an electric field, to keep the nozzle tip wet. Ordinary inkjet printers must clear dried ink from their nozzles regularly to maintain operation, which can be wasteful. This invention solves that problem. The clog-free nozzle could save homes and businesses money, say the researchers who worked on the project. An academic paper explaining the invention was published in the Journal of Microelectromechanical Systems and can be found here. [See 13 more examples of biomimicry in this Treehugger.com slideshow of technologies inspired by nature.]
|This article was translated and revised by Patrick Vacek.| The threat of kleptography has been discussed since the mid-1990s, but it was not until recently that it received significant attention. This may be in part due to increasingly common discussions about the potential threats from this kind of highly sophisticated attack, but it is more likely due to several recent documented implementations of kleptographic attacks. These examples show that one of the fundamental aspects of black-box cryptography is also a prime target for intrusion. Hardware security modules (HSMs), smart cards, and Trusted Platform Modules (TPMs) claim to secure cryptographic keys from all external access. However, these devices leave the user with an element of uncertainty: How can the user be sure that the black box is doing exactly what it claims to – and nothing more? If it’s impossible to externally determine whether a cryptosystem has been manipulated, and the generated output does not appear suspect, one must ultimately place his or her trust in the manufacturer of a cryptosystem to know that no such manipulation has occurred. How can one be sure that the schematics and designs that a manufacturer may provide truly correspond to the finished product? Furthermore, it does not require a conspiracy theorist to imagine the influence that government organizations could have on certain manufacturers. How can we really make sure that we know what is happening inside the black boxes we use? Kleptography is the "study of stealing information securely and subliminally". A kleptographic attack is an attack in which a malicious developer uses asymmetric encryption to implement a cryptographic back door. In this way, cryptography is employed against cryptography: the back door in question is not an additional channel of communication to the world outside the cryptosystem, nor does it require the transmission of additional data. The back door is instead embedded directly within the intended communication. Thus, kleptography is a subfield of cryptovirology, the application of cryptography in malware [3,6,11]. However, the target of a kleptographic attack is not just any general form of software, but rather the specific environment of a cryptosystem. The following example describes a possible kleptographic attack. A black box generates asymmetric key pairs, of which one is a private key and the other public. The private key, which is used in decryption and digital signature generation, should remain exclusively inside the black box to prevent improper usage and duplication. The public key, however, can be freely distributed. As is generally accepted, no one can derive the private key from the public key – or can they? In fact, such a derivation is indeed possible if the key generation process has been manipulated in a particular fashion. A cryptographic back door could be embedded in the process of manufacturing the cryptosystem, which would then provide an attacker access to the private key without drawing the attention of a third party. This is possible because the generated public keys will not appear conspicuous, nor will any unexpected communication or errors arise while using the cryptographic functionality. The impact is extreme. With a copy of the private key, the attacker can counterfeit signatures and decrypt secret data, even though the cryptographic keys were generated in a sealed black box and no unauthorized access to the contents was permitted.
A simplified version of this attack could manipulate the random number generator within the cryptosystem (see Figure 1) such that the key generation process would use a pseudo-random number generator and incorporate a seed known to the attacker, instead of using a truly random function. By knowing the prime numbers generated in the cryptosystem, the attacker could produce a copy of the secret key outside the black box. A manipulation of this sort could be revealed through reverse engineering, as long as this is not prevented by a security mechanism, such as those found in dedicated cryptographic hardware. Because the seed of the pseudo-random number generator is fixed within the source code, a reverse engineer could then also calculate the private key outside of the black box. From the attacker's perspective, it is desirable to have exclusive access to the attack mechanism, which would then no longer be the case. More sophisticated kleptographic attacks can indeed prevent reverse engineers from making this kind of discovery. Kleptography was first discussed in 1996 at the CRYPTO conference by Adam Young and Moti Yung, who called attention to numerous opportunities for attacks against the cryptography of black-box systems. They introduced the concept of a "Secretly Embedded Trapdoor with Universal Protection" (SETUP), an attack which they described with respect to RSA key generation. The distinguishing feature of this attack is that it can only be detected through reverse engineering (if at all), and that if found, it still cannot be used by the discoverer. The reverse engineer can only find the public key of the attacker, but not the private key. Because this attack is itself based on asymmetric cryptography, it can be considered ‘secure’ – from the attacker's perspective. This and other kleptographic attacks have been implemented in the e-learning tool JCrypTool (see Figure 2). Over the years, the SETUP attack has been developed further. The first attacks targeted algorithms based on the difficulty of factoring large integers (e.g., RSA), but attacks on algorithms relying on the discrete-logarithm problem soon followed. In 2002, a powerful attack against the Diffie-Hellman key exchange algorithm was published. At the 26th Chaos Communication Congress (26C3) in December 2009, Moti Yung alluded to these techniques in his presentation "Yes We Can't!", in which he stressed that kleptography restricts trust in the manufacturer. So why has kleptography been slow to garner significant attention, despite hardware cryptosystems being employed precisely where the greatest demands for security exist? After all, HSMs are used specifically to protect particularly sensitive infrastructural keys for businesses in which the potential financial and reputational damage of a successful attack would be extremely high. Perhaps the fundamental problem is not well-enough recognized, or perhaps users assume that it is in the manufacturer's best interest not to sell manipulated hardware. If the whistle were to be blown on such an incident, the damage could be enough to ruin a company. There are certainly techniques to counteract kleptography. The European Union requires that security-related industrial hardware must be independently evaluated in two different EU states to achieve maximal transparency over the entire production process. An even more basic approach is to simply combine hardware from multiple manufacturers into one system.
For example, a company could use two smart cards from different and independent manufacturers and then encrypt all data twice, a form of cascading or multiple encryption. In such a case, even if a manufacturer had manipulated the key generation process, it would be unable to decrypt the data, because it would not have the private key of the other smart card. But even with both of these approaches, one can merely reduce the risk. These techniques do not guarantee that manipulation has not occurred, and their increased complexity makes them rather impractical. Because kleptography requires the use of a subliminal channel to extricate information without detection from a black box, another logical idea is to eliminate all such possible subliminal channels. This line of thought was first pursued in 1984 by Gus Simmons and continued in further publications [8,9], in which random numbers were built into a sort of authentication protocol. Another technique was introduced in 2002 in which a third party can verify the RSA key generation process. This process is a type of distributed key generation, in which the private key is only known to the black box, thus providing assurance that the key generation was not manipulated and that the key cannot be revealed through a kleptographic attack. Other concrete attempts to combat the threat of kleptography have followed, and researchers continue to search for further possibilities. Kleptography is richly interesting in terms of cryptography, but in practice it is just one of many threats in a complete system. Rather than manipulate the cryptographic implementation itself, at present it is usually easier for an attacker to assault other components of the system, such as its endpoints. For example, an attacker could use a trojan to capture confidential data from a PC before it was even encrypted by a smart card. A system is only as secure as the weakest link in the chain. Kleptography and its defenses will likely become much more significant in the future as security breaches in other components become better recognized and restricted. It should be considered, however, that in situations that demand the highest security, the expense of implementing countermeasures against kleptography is probably already justified. Even in strongly regulated environments, and even with cross-checking and rigorous testing, the final product can still contain a back door or may be vulnerable to some form of intrusion. After all, it is not necessarily the manufacturer that may construct a back door to a cryptosystem. For the sake of comparison and clarification, the following demonstrates the process of normal RSA key generation with 2048-bit keys: choose two large random primes p and q, compute n = pq and phi(n) = (p - 1)(q - 1), select a public exponent e with gcd(e, phi(n)) = 1, and compute the private exponent d as the inverse of e modulo phi(n). This process produces the public key (n, e) and the private key (d). The encryption and decryption of a message m with RSA is carried out via exponentiation: encryption computes c = m^e mod n, and decryption computes m = c^d mod n. The kleptographic RSA key generation process (presented here in a simplified version) is modified and uses the public key of the attacker's own RSA key pair (N, E). Note that the attacker's key is half as long (in this case, 1024 bits) as the keys under attack. The result is again a public key (n, e) and a private key (d). The discovery of the private key is only possible with possession of the public key (n, e) created by the kleptographic RSA key generation as well as the private key of the attacker (D). As a result, the attacker acquires the private key (d) of the victim.
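To make the contrast concrete, here is a hedged Python sketch of honest RSA key generation alongside the simplified "known seed" back door described earlier (the weak-RNG variant), rather than the full Young–Yung SETUP construction. The key size, seed value, and helper names are illustrative assumptions, and the sketch relies on sympy for primality testing.

```python
# Sketch: honest RSA key generation vs. the simplified kleptographic variant in
# which the black box secretly derives its primes from a seed the attacker knows.
# Toy 512-bit keys for speed; not a hardened implementation.
import random
from math import gcd
from sympy import isprime

def gen_prime(bits, rng):
    """Draw random odd candidates of the requested size until one is prime."""
    while True:
        cand = rng.getrandbits(bits) | (1 << (bits - 1)) | 1
        if isprime(cand):
            return cand

def rsa_keygen(bits=512, rng=None):
    """n = p*q, e = 65537, d = e^-1 mod lambda(n); rng supplies all randomness."""
    rng = rng or random.SystemRandom()      # honest path: real OS entropy
    e = 65537
    while True:
        p, q = gen_prime(bits // 2, rng), gen_prime(bits // 2, rng)
        lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
        if p != q and gcd(e, lam) == 1:
            return (p * q, e), pow(e, -1, lam)

# Kleptographic variant: the generator is seeded with a value embedded by the
# manufacturer. The resulting keys look perfectly normal to the user.
ATTACKER_SEED = 0xC0FFEE                    # hypothetical embedded seed
victim_public, victim_private = rsa_keygen(rng=random.Random(ATTACKER_SEED))

# The attacker replays the same deterministic process outside the black box
# and obtains an identical copy of the "secret" key.
_, recovered_private = rsa_keygen(rng=random.Random(ATTACKER_SEED))
assert recovered_private == victim_private
```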
Professor Bernhard Esslinger teaches IT security and cryptography at the University of Siegen and leads the open-source CrypTool project. Patrick Vacek is a software developer for Exegy, Inc. in St. Louis and is also a contributor to the CrypTool project.
|This article was originally published by <kes> – The Information Security Journal and has been translated from the original German|
References
[1] A. Young, M. Yung, The Dark Side of Black-Box Cryptography, or: Should we trust Capstone?, in: N. Koblitz (Ed.), Advances in Cryptology – Crypto ’96, LNCS 1109, Springer, 1996, ISBN 978-3-540-
[2] A. Young, M. Yung, Kleptography: Using Cryptography Against Cryptography, in: W. Fumy (Ed.), Advances in Cryptology – Eurocrypt ’97, LNCS 1233, Springer, 1997, ISBN 978-3-540-62975-7
[3] A. Young, M. Yung, Cryptovirology FAQ, Version 1.31, http://www.cryptovirology.com/cryptovfiles/cryptovirologyfaqver1.html
[4] M. Yung, Kleptography: The Outsider Inside Your Crypto Devices, and its Trust Implications, DIMACS Workshop on Theft in E-Commerce: Content, Identity, and Service, 2005, PowerPoint presentation: http://dimacs.rutgers.edu/Workshops/Intellectual/slides/yung.ppt
[5] M. Yung, Yes We Can’t, 26th Chaos Communication Congress, 2009, MPEG-4 video recording: http://events.ccc.de/congress/2009/Fahrplan/events/3702.en.html
[6] A. Young, M. Yung, Malicious Cryptography: Exposing Cryptovirology, John Wiley & Sons, 2004
[7] G. J. Simmons, The Prisoners' Problem and the Subliminal Channel, in: D. Chaum (Ed.), Proceedings of Crypto ’83, Plenum Press, 1984, ISBN 978-0-306-41637-8
[8] G. J. Simmons, The Subliminal Channel and Digital Signatures, in: T. Beth, N. Cot, I. Ingemarsson (Eds.), Proceedings of Eurocrypt ’84, LNCS 209, Springer-Verlag, 1985, ISBN 978-3-540-16076-2
[9] Y. Desmedt, C. Goutier, S. Bengio, Special Uses and Abuses of the Fiat-Shamir Passport Protocol, in: C. Pomerance (Ed.), Proceedings of Crypto ’87, LNCS 293, Springer-Verlag, 1988, ISBN 978-3-540-18796-7
[10] A. Juels, J. Guajardo, RSA Key Generation with Verifiable Randomness, in: D. Naccache, P. Paillier (Eds.), Public Key Cryptography: 4th International Workshop on Practice and Theory in Public Key Cryptosystems, Springer-Verlag, 2002, ISBN 978-3-540-43168-8
[11] A. Young, M. Yung, Towards a Book on Advances in Cryptovirology, Selected Chapters as PDF, www.cryptovirology.com/cryptovfiles/newbook.html
Over 430 million new pieces of malware were discovered last year, a 36% increase from the previous year (according to Symantec). Malware attacks are projected to rise in volume and frequency. Hackers are becoming more skilled at detecting vulnerabilities and commonly use malware as their method of attack. It is critical to be aware of the current malware threats and learn how you can defuse potential exploits. O-checker: Detection of Malicious Documents Through Deviation from File Format Specifications describes a powerful tool, o-checker, that specializes in identifying documents containing malware-infected executable files. O-checker detected 96.1% of malicious files hidden in targeted email attacks in 2013 and 2014. Targeted email attacks normally embed malware in various document formats. This talk will examine the techniques used for hiding infected files and disclose why o-checker is projected to maintain a high malware detection rate. Next-Generation of Exploit Kit Detection by Building Simulated Obfuscators reveals that exploit-kits are driving epidemic levels of malware delivery. Each exploit-kit has an obfuscator, which transforms malicious code into obfuscated code to bypass firewall detection. Many researchers examine the obfuscated page instead of the actual obfuscator since purchasing an obfuscator that was utilized by an exploit-kit is incredibly expensive. This Briefing will introduce a cost-effective method of building simulated obfuscators to conduct in-depth examinations and reduce malware attacks. An AI Approach to Malware Similarity Analysis: Mapping the Malware Genome With a Deep Neural Network introduces a new method of detecting similarities in malware code, which is easier to manage and more efficient than traditional systems. Standard malware detection systems require constant, manual effort in adjusting the formula to identify malware similarities. This new malware detection approach significantly reduces manual adjustments in the formula and is the first to use deep neural networks for code sharing identification. This talk will explain how the new malware detection approach operates and provide examples of its improved accuracy. If you’re interested in a hands-on experience detecting malware, Hunting Malware Across the Enterprise teaches students how to track malware without having an obvious starting point. This nearly sold out Training will dive deep into the threat landscape, indicators of compromise, and scripting--which will help in your search for malware. If you want to take a highly-technical course that challenges malware defense mechanisms, check out Advanced Malware Analysis. This Training teaches students how to combat anti-disassembly, anti-debugging and anti-virtual machine techniques. To stay up-to-date with the latest information security research, take a look at the Briefings and Trainings we’ve lined up for Black Hat USA 2016. We hope you join us at Mandalay Bay in Las Vegas, Nevada, July 30-August 4 for the biggest week in InfoSec.
My oh my, we've thrown a lot of terms in to the mix. These days when you read magazine articles or you look through your local friendly blogger :) you find a slew of these terms used. Maybe it's time to refresh our memory on exactly what these terms mean. Why? Because they are pertinent to MDM and MMDM (master metadata data management). So read on... These definitions have been pulled from http://www.websters.com 1. The classification of organisms in an ordered system that indicates natural relationships. 2. The science, laws, or principles of classification; systematics. 3. Division into ordered groups or categories: “Scholars have been laboring to develop a taxonomy of young killers” (Aric Press). 1. The act, process, or result of classifying. 2. A category or class. 3. Biology. The systematic grouping of organisms into categories on the basis of evolutionary or structural relationships between them; taxonomy. 1. The branch of metaphysics that deals with the nature of being. 1. The act of registering; registration. 2. The registered nationality of a ship. 3. A place for registering. -- A book for official records. -- The place where such records are kept. Ok, what does this have to do with Master Data or Metadata or BI for that matter? The industry is throwing the terms around too loosely. Registries are being used for Metadata, as such they should be - at the bottom level of a Taxonomy is a registry. The first step to successful enterprise Metadata Management or governance is getting a handle on the Taxonomy of the business and the metadata used within the business. This is critical to identifying and governing specific components of the MDM strategy. Taxonomies should be utilized to manage, govern, and view (visualize) the metadata from an enterprise perspective. However, the act of building a metadata management solution, or a Master Data Management solution requires the implementation of a classification with a registry or set of registries underneath. It is vital that we all speak the same language here and not get confused. Some of my blog entries I've discussed the possibilities of VISUALIZING data sets, well guess what? An EASY way to visualize huge metadata collections is to use a Tree classification as the implementation side of the taxonomy. The registries are at the leaves in the trees and provide further drill down, but have nothing to do with the visualization. Wait a minute, I can see this for Metadata, but how does that help my MDM effort? Well, as I've blogged before - Metadata or Master Metadata Management needs to be a part of EVERY MDM initiative out there. Why? Because it provides the CONTEXT to understanding our Master Data. How it's used, where it's used, when it should / should not be used, and what the elements mean at varying levels within the organization. Master Metadata (at a very simplistic viewpoint) really is a data-driven taxonomy (representation) of the BUSINESS. Without tying our master data back to the business it will lose value quickly within the company, and eventually end up where all master systems end-up... in the sunset on the horizon... Questions? Thoughts? Haiku's? Incantations? I'll take them all, let me know what you think...
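As a concrete footnote to the tree idea sketched above, here is a small illustrative example of a taxonomy with registries at the leaves. The class names, levels, and sample entries are hypothetical, not taken from any particular MDM product.

```python
# Hypothetical sketch: a business taxonomy as a tree, with registries only at
# the leaves. Walking the tree gives the visualization view; the registries
# hold the actual metadata records for drill-down.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Registry:
    entries: dict = field(default_factory=dict)     # element name -> metadata record

@dataclass
class TaxonomyNode:
    name: str
    children: list = field(default_factory=list)
    registry: Optional[Registry] = None             # populated only on leaf nodes

    def walk(self, depth=0):
        suffix = f"  [{len(self.registry.entries)} entries]" if self.registry else ""
        print("  " * depth + self.name + suffix)
        for child in self.children:
            child.walk(depth + 1)

customer_registry = Registry({
    "CUST_ID": {"definition": "unique customer identifier", "source": "CRM", "steward": "Sales Ops"},
})

enterprise = TaxonomyNode("Enterprise", [
    TaxonomyNode("Sales", [TaxonomyNode("Customer master", registry=customer_registry)]),
    TaxonomyNode("Finance", [TaxonomyNode("GL accounts", registry=Registry())]),
])
enterprise.walk()
```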
Technology is ever changing and very few areas are stable. This makes information technology interesting, but challenging. T-SQL or Transact-SQL is one area that is stable. There is also a vast use of T-SQL and many career options. T-SQL is the combination of standard SQL as well as the proprietary adds-ons for Microsoft that include functions, stored procedures and other elements of the language. SQL or the Structured Query Language is the programming language used to inquire, create, control and manipulate objects in a relational database. It is also used for administering the database. SQL is both an American National Standards Institute (ANSI) and International Standards Organization (ISO) standard. SQL statements are used to perform tasks such as adding data, making data modifications, creating objects, performing maintenance tasks and retrieving data from a database. Most analysis and business decisions are made as a result of querying and understanding data in the databases. Databases hold core information that allow businesses to function. Relational database management systems use SQL. These include platforms like Oracle, DB2, Sybase, Microsoft SQL Server, and Access as well as others. Typically, each database platform follows the ANSI / ISO standards and then also has features in the language that are proprietary. This is true of Microsoft SQL Server and the full language is called T-SQL. The place to begin learning T-SQL depends on the task at hand. Here are some example roles and starting points to jump start your career in each role. Report Writer or Analyst - Focus on the Select statement. Report writers and analysts must know how to ask questions (i.e. query the database). Global Knowledge course Querying Microsoft SQL Server 2014 (M20461) or Querying Data with Transact-SQL (M20761) are good starting points. - Learn how to turn your queries into stored procedures. For example, if you have a query “Select last, first from customer” you can turn it into a stored procedure of “Create procedure usp_GetCustInfo as Select last, first from customer”. Then, you only need to call the procedure from the report “Execute usp_GetCustInfo”. Stored procedures provide performance and security benefits when querying the database. - Spend time learning SQL Server Reporting Services. This tool will allow you to graphically produce reports and utilize the Select statements and stored procedures you create in T-SQL. Global Knowledge course Implementing Data Models and Reports with Microsoft SQL Server 2014 (M20466) is an excellent course dealing with reporting services as well as analysis services. - Learn these T-SQL statements: Insert, Update and Delete. These allow data entry and modification. - Focus on the Select statement. Although the primary focus is data entry, querying the data supports data verification. - Learn the interface that supports data entry. This could be a web page or a Microsoft Windows application. Each interface will have its own unique design. - Focus on Create, Alter and Drop statements. These statements give definition to the database objects. Examples include creation of tables, stored procedures, views, functions and triggers. The Alter statement supports modification. Drop removes an object. - Take a relational database design course or read a relational database design book. Global Knowledge courses Developing Microsoft SQL Server 2014 Databases (M20464), Introduction to SQL Databases (M10985), or Developing SQL Databases (M20762) are excellent options. 
“Database Design for Mere Mortals”, by Michael Hernandez is a fabulous first book. - Learn to map business questions to objects that need to exist. Take this question: Who are the best ten customers in terms of revenue and loyalty? Tables need to exist for customers, orders, time and perhaps customer satisfaction. This single question represents many topics – customers, orders, loyalty (probably over time), and revenue. - Start small. Two ways to do this are to: join a team as a junior database administrator (DBA) and learn from more senior people or volunteer at a nonprofit or small business to help with their database. Spend time learning from the staff about their business and database platform. - Focus on T-SQL statements such as Grant, Revoke and Deny. These all deal with security which is a primary responsibility of a DBA. These statements control access to objects that have been created by the database designer. These statements control whether a person is allowed to see data, modify data, create tables, drop tables or any other privilege. - Focus on performance and metadata. Learn dynamic management views, system stored procedures, and system functions that deal with metadata. Administering Microsoft SQL Server 2014 Databases (M20462) is a good starting point for those pursuing a database administration role. Although each role has its specialty, every role needs the ability to query the database, or database metadata (i.e. the objects within the database). Every person needs an understanding of the Select statement. Many roles need an understanding of statements such as create, alter, drop, insert, update, delete, grant, revoke and deny. The place to begin in T-SQL is with the select statement, regardless of role. Querying Microsoft SQL Server 2014 (M20461) Administering Microsoft SQL Server 2014 Databases (M20462) Developing Microsoft SQL Server 2014 Databases (M20464) Implementing Data Models and Reports with Microsoft SQL Server 2014 (M20466) Querying Data with Transact-SQL (M20761) Developing SQL Databases (M20762)
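To tie the earlier stored-procedure advice to something runnable, the sketch below wraps the article's own usp_GetCustInfo example in Python via pyodbc. The connection string, database, and customer table are assumptions for illustration; any real environment will differ.

```python
# Hedged sketch: run an ad hoc SELECT, then wrap it in a stored procedure and
# call the procedure from client code. Connection details are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=SalesDb;Trusted_Connection=yes"
)
cur = conn.cursor()

# Ad hoc query: the report writer's starting point.
cur.execute("SELECT last, first FROM customer")
print(cur.fetchall())

# Create the stored procedure once (CREATE PROCEDURE must start its own batch,
# hence the nested EXEC), then reports only need to call it.
cur.execute("""
IF OBJECT_ID('usp_GetCustInfo', 'P') IS NULL
    EXEC('CREATE PROCEDURE usp_GetCustInfo AS SELECT last, first FROM customer')
""")
conn.commit()

cur.execute("EXEC usp_GetCustInfo")
print(cur.fetchall())
conn.close()
```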
Black Box Explains...Fiber optic cable construction
Fiber optic cable consists of a core, cladding, coating, strengthening fibers, and a cable jacket. This is the physical medium that transports optical data signals from an attached light source to a receiving device.
Core: The core is a single continuous strand of glass or plastic that’s measured (in microns) by the size of its outer diameter. The larger the core, the more light the cable can carry. All fiber optic cable is sized according to its core’s outer diameter. The three multimode sizes most commonly available are 50, 62.5, and 100 microns. Single-mode cores are generally less than 9 microns.
Cladding: This is a thin layer that surrounds the fiber core and serves as a boundary that contains the light waves and causes the refraction, enabling data to travel throughout the length of the fiber segment.
Coating: This is a layer of plastic that surrounds the core and cladding to reinforce the fiber core, help absorb shocks, and provide extra protection against excessive cable bends. These buffer coatings are measured in microns (µ) and can range from 250 to 900 microns.
Strengthening fibers: These components help protect the core against crushing forces and excessive tension during installation. The materials can range from Kevlar® to wire strands to gel-filled sleeves.
Cable jacket: This is the outer layer of any cable. Most fiber optic cables have an orange jacket, although some types can have black or yellow jackets.
Detecting unauthorized access with Microsoft Proxy Server When it comes to protecting your network from Internet users with malicious intent, many administrators rely on expensive third-party software for their Internet firewall. However, if you have a copy of Microsoft BackOffice, you already have a copy of the Microsoft Proxy Server. In this article, we'll introduce you to Proxy Server. We'll then go on to explain some options you can use to configure it to act as an effective Internet firewall. What is Proxy Server? Proxy server is probably the least publicized member of the BackOffice suite. Compared with other BackOffice components--such as SMS, Exchange, or SQL--very little documentation is available on Proxy Server. In case you're unfamiliar with Proxy Server, it's a collection of Windows NT services and a user interface that allows you to share an Internet connection among multiple computers. How does Proxy Server work? If you're using static IP addresses, you can use ipconfig to see the TCP/IP configuration as the Windows server sees it. The information displayed isn't simply a regurgitation of what's inserted into the TCP/IP properties sheet--rather, it's a way to tell if Windows has accepted the address that you've used. By default, ipconfig lists the IP address, subnet mask, and default gateway of each network adapter. If you require more detailed information, you can use the /all switch after the ipconfig command. Doing so will cause the ipconfig program to display more detailed information, such as the MAC address of each network card, and an indication of whether the address was provided by a DHCP server. Like every other Internet firewall, a Proxy Server must have two NIC cards. One NIC card connects to your Internet connection; the other connects to a hub that links the server to the rest of your network. As such, the Proxy Server acts as a router that moves traffic back and forth between your local network and the Internet. Because of the insecure nature of the Internet, only certain types of traffic should be able to move across the router. For example, you'd never want anyone who's trying to illegally access your network from across the Internet to get the IP address of any of your servers. Therefore, the Proxy Server hides every IP address on your network--except for its own--from the Internet. When a computer on your network needs to access an Internet resource, it contacts the Proxy Server. The Proxy Server then connects to the desired resource using its own IP address. Once the resource has been acquired, the requested information is routed back to the computer that originally requested it. And because no internal IP addresses ever reach the outside world, you can save money by using bogus IP addresses on every computer except for the Proxy Server. Just as you don't want people on the Internet to find out the IP addresses of your servers, you don't want them to be able to snoop around on your network. To prevent this type of access, you must configure Proxy Server to disable all TCP/IP ports except ones needed. When you initially set up your Proxy Server, it's safe to say that you'll do everything you can to make your proxy firewall secure. But how do you know if someone is trying to break in to your network, and what can you do about such an attempt? You can easily accomplish intruder detection through some of the built-in Proxy Server settings. To adjust these settings, open Microsoft Management Console and load the appropriate snap-in. 
Proxy Server has three types of proxy agent: Web Proxy, Winsock Proxy, and Socks Proxy. Although these proxy agents control different areas of routing, the configuration options they contain are almost identical. Space restrictions prevent me from discussing each area in detail, so I'll use Winsock Proxy for my example. Just keep in mind that to have a truly secure Proxy Server, you must secure all three types of proxy agents. To take a closer look at some of the security options, navigate to Console Root|Internet Information Server|your server|Winsock Proxy. Right-click on Winsock Proxy and select Properties from the resulting context menu. When you do, you'll see the Winsock Proxy Service properties sheet. Click the Security button on the Service tab to open the Security properties sheet. It contains several tabs that can be used to enhance network security, as long as the Proxy Server has direct Internet access. For example, you can use the Domain Filters tab to enable domain filtering. By doing so, you can either grant or deny access to all domains except the ones you specify. The next step in determining whether anyone makes an attempt at accessing your network is to select the Alerting tab, which lets you trigger an alert based on various conditions (such as a rejected packet or a protocol violation). You can send the alert message via e-mail, or you can add the alert to the Windows NT event log. After establishing such settings, you can keep an eye out for these conditions. One or two isolated attempts probably don't mean anything--however, if you detect multiple attempts, you'll need to do something about it. Disabling unnecessary protocols |"Although users can be given unlimited access to protocols, I strongly recommend denying access to any protocol that your users don't require for their jobs. "| One way of protecting your network if you detect an attack is to disable all TCP/IP ports and protocols that aren't absolutely necessary, including inbound and outbound protocols. To do so, return to the Winsock Properties sheet and select the Permissions tab. The Permissions tab contains a drop-down list of every protocol that Proxy Server knows about. To control which users are allowed to use a protocol, select a protocol from the drop-down list and click Edit. Although you can grant all your users unlimited access by using the Unlimited Access option, I strongly recommend denying access to any protocol that your users don't require for their jobs. Adding and removing protocols The next area that you should look at is the Protocols tab, which lists every protocol Proxy Server knows about. You can use this tab to add protocols or remove existing protocols. You can also change the port assignments for any given protocol. To do so, simply select the protocol and click Edit to display the protocol's initial connection port. You'll also see a list of port numbers that are allowed for inbound and outbound connections. I recommend disabling any inbound ports that aren't necessary. If you're feeling brave, you can get rid of unwanted protocols altogether. Whatever you decide to do, though, just be sure to make a backup first or to write down the settings you've changed--just in case you accidentally remove a required port or protocol, or if you need to use a specific port or protocol in the future. // Brien M. Posey is an MCSE who works as a freelance writer and as the Director of Information Systems for a national chain of health care facilities. 
His past experience includes working as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all.
Content protection is a security framework that you can use to protect the data in your app. Content protection addresses the problem of someone stealing a BlackBerry smartphone and copying its data, which may be possible even when data is encrypted and the smartphone is locked. Content protection encrypts data in such a way that the encryption key is inaccessible when the smartphone is locked. There are three requirements to implement content protection in a Java app:
- There is content protection functionality on every BlackBerry smartphone. To use it, the smartphone must have a smartphone password, and content protection must be enabled by the smartphone user or by an IT policy rule.
- To protect data in an app, the app must subscribe to the content protection framework by registering a listener.
- Content protection functionality is triggered by the user locking and unlocking the smartphone.
Content protection can be used to encrypt data in String objects or byte arrays. Content protection can apply to data that is not persisted, but the Content Protection API contains specific functionality for the persistent store. Whenever an app attempts to encrypt an object, the unencrypted version of the object is marked with a special bit called a plaintext bit. Any object marked with a plaintext bit is presumed to contain unencrypted, sensitive data. An app specifies which data it considers to be sensitive by encrypting and decrypting objects, and marking these objects with plaintext bits. Until an attempt has been made to encrypt data, the smartphone assumes that that data is not sensitive. When the smartphone is locked, the content protection framework uses these plaintext bits to ensure that as many plaintext objects as possible are erased from smartphone memory. Here are the basic steps for implementing content protection for your app:
- Subscribe to content protection.
- Implement the PersistentContentListener interface.
- Register the listener using PersistentContent.addListener() or PersistentContent.addWeakListener().
- Encode and decode your objects, as required, using PersistentContent.encode() and PersistentContent.decode().
In the context of content protection, encoding means to encrypt and/or compress objects.
Compressing data with content protection
In addition to encrypting data, content protection can be used to compress data. Compression and encryption are enabled separately. As with encryption, apps can compress data by passing data objects to the content protection framework and then storing the altered versions that the framework returns. Apps can only compress data if a BlackBerry smartphone user has enabled compression on their smartphone. The content protection framework does not compress all data that is passed to it. For example, anything smaller than 32 bytes is not compressed because the overhead associated with the compression process would result in compressed data that is larger than the original, uncompressed data.
Enabling content protection
For content protection to be in effect, the following conditions must be met:
- The BlackBerry smartphone must have a password set.
- Content protection must be enabled on the smartphone, either through an IT policy rule or by the BlackBerry smartphone user. (Content protection is not enabled by default.)
- The app must subscribe to the content protection framework.
When content protection is first enabled on a smartphone, the smartphone must be locked and remain locked long enough (a few minutes) for the smartphone to fully enable content protection and remove all unencrypted, sensitive data. Each time a smartphone transitions from having content protection turned off to having it turned on, the following occurs: - The content protection framework notifies all registered apps to re-encode their data. During this process, data may be encrypted. - The smartphone waits two minutes before it attempts to fully enable content protection. This waiting period is designed to guard against the possibility that the user only wanted to lock the smartphone temporarily. - The smartphone displays a dialog box reading, "Content Protection is being enabled. This operation may take a few minutes to complete." Content protection is only considered to be fully enabled after all registered apps have finished encoding their data, the content protection framework has cleaned all the plaintext objects it can from memory, the encryption keys are no longer available, and the smartphone is securely locked. This process may take some time to complete, depending on the amount of data that needs to be encoded. - The content protection framework enforces additional safeguards, such as having garbage collection zero out deleted data that was previously encrypted. Enabling content protection through IT policy rules In a BlackBerry Enterprise Server environment, administrators can use IT policy rules to turn content protection on and off on the BlackBerry smartphones they administrate. There are several IT policy settings that pertain to content protection: Content Protection Usage IT policy rule: Determines whether smartphone users can enable content protection on their devices. When this rule is set to disallowed, users cannot turn on content protection. When this rule is set to allowed, users can choose whether or not to use content protection. The Content Protection Usage IT policy rule cannot be used to force users to use content protection; that is handled by the Content Protection Strength IT policy rule. Content Protection Strength IT policy rule: Specifies the strength of the key used to encrypt data when the smartphone is locked. If the Content Protection Strength IT policy is set, content protection cannot be turned off. Content Protection of Contact List IT policy rule: Includes or excludes contacts from content protection. Two Factor Content Protection Usage IT policy rule: Specifies whether the smartphone uses an installed smart card reader and certificate for certain content protection operations. This option is only present if a supported smart card driver is installed. Force Content Protection Of Master Keys IT policy rule: Enables or disables content protection for smartphone transport keys. IT policy rules for content protection only specify a minimum level of security. Provided that content protection has not been forbidden by an IT policy, users can increase the strength of encryption used on their smartphones. For more information about the IT policy rules that affect content protection, see the BlackBerry Enterprise Solution Security Technical Overview, available at http://docs.blackberry.com/en/admin. 
Enabling content protection by a smartphone user The following content protection options allow BlackBerry smartphone users to specify the strength and extent of content protection on their smartphones: Encrypt: Enables or disables content protection and media encryption. Strength: Dictates the strength of the key used to encrypt data when the smartphone is locked. Include Contacts: Includes or excludes certain contact fields from content protection. When content protection for contacts is turned on, the caller's name does not appear on the screen if the user receives a call from a contact when the smartphone is locked. If contacts are excluded from content protection, the user is presented with the caller's name and user picture, as usual. Include Media Files: Specifies whether media files stored on the internal media card (eMMC) should be encrypted. Although it is grouped with the content protection options, this option actually has nothing to do with content protection. Content protection is not used to encrypt media files and generally only applies to data that is ultimately written to the smartphone's application storage. Two-factor Protection: Specifies whether the smartphone uses an installed smart card reader and certificate for certain content protection operations. This option is only present if a supported smart card driver is installed. Understanding locked and unlocked smartphone states The content protection framework encrypts data differently when the BlackBerry smartphone is locked and when it is unlocked. When the smartphone is unlocked, it is generally considered appropriate for security requirements to be relatively lax. Therefore, all the cryptographic keys necessary to decrypt data are readily available. Apps that subscribe to the content protection framework can encrypt, decrypt, compress, or decompress data at any time. The content protection framework does not dictate which data should be considered sensitive or when that data should be encrypted or compressed; it is up to each individual app to implement the available content protection APIs in a way that makes the most sense for that app. When the smartphone is locked, it is generally considered appropriate for security requirements to be much stricter. Therefore, the content protection framework only allows apps to encrypt data, not decrypt it, and it ensures that garbage collection removes any unencrypted, potentially sensitive data as promptly as possible. When an app receives potentially sensitive data while the smartphone is locked, the app should encrypt the data immediately. The content protection framework uses one cryptographic key to encrypt data when the BlackBerry smartphone is unlocked and a different set of keys to encrypt data when the smartphone is locked. When the smartphone is unlocked, the content protection framework encrypts all data passed to it using a symmetric cryptographic key known as the content protection key or bulk key. The bulk key is a 256-bit AES key. It can encrypt and decrypt data quickly, making it ideal for transcoding data when the smartphone is unlocked and the user is unlikely to tolerate long delays or reduced performance that might result from using a slower and more cumbersome system of keys. The bulk key can't be used when the smartphone is locked because it can decrypt data that it previously encrypted. When the smartphone is locked, the bulk key must be hidden from potential attackers by encrypting it using the device password. 
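The "hide the bulk key by encrypting it with the device password" step can be sketched generically: derive a key-encryption key from the password, wrap the random 256-bit bulk key on lock, and unwrap it at unlock. This is an illustrative sketch using the Python cryptography package, not BlackBerry's implementation; the KDF, iteration count, and salt handling are assumptions.

```python
# Illustrative sketch of sealing the bulk key under the device password.
# Not BlackBerry's implementation; parameters are assumptions.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def kek_from_password(password: bytes, salt: bytes) -> bytes:
    """Key-encryption key derived from the device password."""
    return PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                      salt=salt, iterations=600_000).derive(password)

bulk_key = AESGCM.generate_key(bit_length=256)    # the "content protection key"
salt, nonce = os.urandom(16), os.urandom(12)

# On lock: seal the bulk key; only the wrapped blob is kept around.
wrapped = AESGCM(kek_from_password(b"device-password", salt)).encrypt(nonce, bulk_key, None)
bulk_key = None                                    # plaintext key is discarded

# On unlock: the user supplies the password and the bulk key is recovered.
recovered = AESGCM(kek_from_password(b"device-password", salt)).decrypt(nonce, wrapped, None)
assert len(recovered) == 32
```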
When the smartphone is locked, the content protection framework uses a more complicated system of paired private and public ECC keys to encrypt data. The content protection framework can encrypt in any one of three different encryption strengths when a smartphone is locked, each of which corresponds to a different key size: Strong keys: 160-bit ECC keys (equivalent to an 80-bit symmetric key) Stronger keys: 238-bit ECC keys (equivalent to a 128-bit symmetric key) Strongest keys: 571-bit ECC keys (equivalent to a 256-bit symmetric key) The content protection framework decides which set of keys to use based on the smartphone's Encryption Strength setting, as configured through either the Options application or an IT policy. If the setting is Stronger or Strongest, then when content protection is turned on for the first time, the user is asked to generate random data by pressing random keys or scrolling around the screen. This random data is incorporated into the Stronger and Strongest keys, making them more random and thus more secure. Each pair of ECC keys consists of a public key and a private key. When the smartphone is locked, only the public ECC key is available; the private ECC key, like the bulk key, is hidden and encrypted with the device password. This makes it possible to encrypt data so that it can only be decrypted when the smartphone is unlocked and the private ECC key once again becomes available. The paired ECC keys are also known as the long-term public key and the long-term private key. They are used in combination with a pair of cryptographic keys that are only used once, the one-time public key and the one-time private key. The long-term public key is involved in encrypting every piece of new data that arrives on the smartphone when it is locked, while the long-term private key remains encrypted (and so hidden and inaccessible) for as long as the smartphone is locked. The long-term private key is only used after the smartphone is unlocked, to aid in decrypting those data objects that were previously encrypted using the long-term public key. Every time new data arrives on the smartphone and an application makes it into a data object and passes that object to the content protection framework, a new pair of one-time keys is created, which are only used to encrypt that one data object. Encrypting data when the smartphone is locked To encrypt a new data object when a smartphone is locked, the content protection framework starts by combining the long-term public key with the one-time private key to produce a symmetric key. This symmetric key is similar to the symmetric AES bulk key that is used to encrypt data when the smartphone is unlocked. This symmetric key is used to encrypt the new data. Once the data has been successfully encrypted, the one-time private key and the symmetric key it helped create are deleted and wiped from the smartphone. The one-time public key is embedded within the encoding that represents the encrypted data. This leaves just the long-term public key and the one-time public key available. With only those two keys, there is no way to decrypt the encrypted object. Decrypting data when the smartphone is unlocked When a smartphone is unlocked, the AES bulk key and long-term private ECC key are decrypted and once again made available for the content protection framework to use. At this point, any data encrypted by the content protection framework can be decrypted. 
The content protection framework automatically determines when the bulk key can be used to decrypt data or whether the corresponding one-time symmetric key needs to be recomputed. When the smartphone is unlocked and the long-term private key is available, the long-term public key and the one-time private key can be combined so that they create a single symmetric key. This key is identical to the key that is produced when the long-term private key and the one-time public key are combined. To decrypt an object that was encrypted when the smartphone was locked, the content protection framework combines the long-term private key with the one-time public key. This is relatively easy because when an object is encrypted using a combination of public and private keys, the relevant one-time public key is stored with it. Data that was originally encrypted when the smartphone was unlocked can be decrypted using the AES bulk key, since that is the same key that the content protection framework used to encrypt the data. The content protection framework tracks the number of times it has been asked to decode data that was encrypted when the smartphone was locked. Once it reaches the threshold number of 2053 data objects, the content protection framework sends out a notification requesting that all apps verify the integrity of their data and re-encode any data that does not conform to the current content protection settings or that was not encoded using the bulk key. By re-encoding data when it receives such a notification, an app can improve its performance, since an object encrypted using the bulk key is much quicker to decrypt than one encrypted using the two-key ECC scheme.
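The two-key ECC scheme described above is essentially an ephemeral-static Diffie-Hellman (ECIES-style) construction: encrypt under the long-term public key plus a one-time private key while locked, store only the one-time public key with the ciphertext, and recombine with the long-term private key after unlock. The sketch below shows that generic pattern with the Python cryptography package; the curve, KDF, and cipher are assumptions for illustration and do not reflect BlackBerry's exact key strengths or implementation.

```python
# Generic ephemeral-static ECDH sketch of "encrypt while locked, decrypt after unlock".
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Long-term pair: the private half is sealed under the device password while locked.
long_term_priv = ec.generate_private_key(ec.SECP256R1())
long_term_pub = long_term_priv.public_key()

def derive_key(shared_secret: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"locked-store").derive(shared_secret)

def encrypt_while_locked(plaintext: bytes, long_term_pub):
    one_time_priv = ec.generate_private_key(ec.SECP256R1())   # fresh per object
    key = derive_key(one_time_priv.exchange(ec.ECDH(), long_term_pub))
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # Only the one-time PUBLIC key is kept with the record; the one-time
    # private key and the derived symmetric key are discarded immediately.
    return one_time_priv.public_key(), nonce, ciphertext

def decrypt_after_unlock(record, long_term_priv):
    one_time_pub, nonce, ciphertext = record
    key = derive_key(long_term_priv.exchange(ec.ECDH(), one_time_pub))
    return AESGCM(key).decrypt(nonce, ciphertext, None)

record = encrypt_while_locked(b"message received while locked", long_term_pub)
assert decrypt_after_unlock(record, long_term_priv) == b"message received while locked"
```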
What is OWASP?

In December 2001, the Open Web Application Security Project (OWASP) was established as an international not-for-profit organization aimed at web security discussions and enhancements. For practically its entire existence, OWASP has kept track of nearly every type of attack that can be carried out: social engineering, poor authentication systems, cross-site scripting, DOM XSS, SQL injection, general software vulnerabilities, and more. In short, OWASP has kept track of these issues and encouraged the web community to continually secure everything as well as possible. OWASP's mission has always been to encourage the best security practices, not only by highlighting the most exploited and critical vulnerabilities but also by acting as a leader in the security community to ensure education and understanding reach as many administrators as possible. Since 1999, the Common Vulnerabilities and Exposures (CVE) dictionary has existed to keep track of known software vulnerabilities and to alert consumers and developers alike. OWASP has kept itself mostly focused on keeping a record of the most common CVEs during its tenure, and its suggestions have usually focused on understanding vulnerabilities by general categorization. Now, after an initial attempt in 2009 and a review of industry feedback, OWASP is focusing on a strictly defined standard to help prevent CVEs in the first place.

Standardizing Security's More Dynamic Side

When dealing with web application security, there are three main areas of entry that are most commonly exploited by attackers:
- the people that hold privileged access to the application;
- the services that support the application;
- the functions of the application itself.

Privileged access is valued the most, especially when going after high-value targets rather than blindly running a few scripts against a website. Social engineering has played a critical part in all of the biggest hacks of the past few years. In fact, even the most prestigious security researchers themselves are not immune to such techniques. Kaspersky Lab, a prominent figure in the security industry famous for uncovering nation-state attacks such as Stuxnet, recently found itself the target of just such an incredibly detailed and intricate precision spear-phishing attack, of a kind not seen outside of clandestine cyber warfare against Iran and other nations. Services are also high-value targets, especially recently. The infamous Heartbleed and Shellshock vulnerabilities were not part of the most popular categories in the OWASP Top 10 list, but that did not stop them from quickly becoming among the most critical of the past decade. The services that support a web application usually find themselves in one of two categories when it comes to attacks: a specific vulnerability exploited with precise focus, such as a 0-day, or a broad attack against a major weakness, such as a distributed denial-of-service attack. Typically, most service-based attacks fall into the latter category, but recently precision attacks have been making headlines, largely because the ubiquity of the affected software gives them such widespread effect. However, the functions of the web application itself fall into the most commonly exploited categories year after year. For over a decade, the SQL injection vulnerability remained at the top of OWASP's Top 10 list of vulnerabilities, with over 6,500 major, widespread vulnerabilities in 15 years affecting both open- and closed-source software.
The difficulty in preventing these kinds of attacks stems from the fact that the web application itself is highly dynamic, thus no easy "apply this patch" sort of fix exists. It is through the Application Security Verification Standard (ASVS) that OWASP intends to provide focus to development's dynamic by providing strict and explicitly defined security guidelines. How Netsparker Can Help in Writing More Secure Web Applications Typically a web application security scanner is applied after the fact, when the development of the web application has mostly been done already. Yet development, at the time of writing the code itself, can benefit from a web scanner as well. In good coding practice, unit tests are employed in all major functional areas of software. Here, too, a scanner can be used effectively as another level of unit testing. From the screenshot above you can already see how Netsparker can provide a thorough assessment of not only particular vulnerabilities, but how they are classified by various existing definitions and standards, such as PCI compliance and OWASP vulnerability classifications. This is indeed a highly useful tool when investigating a web application, however it is usually applied after the application is mostly developed, as we mentioned earlier. Introducing Security During the Early Stages of Web Application Development In fact, major organizations like Microsoft encourage the practice of running security analysis synchronously with development -- known as a Security Development Lifecycle. Netsparker Cloud even has an API system that could be triggered from continual build systems, like Atlassian Bamboo or Jenkins, to provide real-time and automated web application security audits. These assessments and classifications can be equally, if not more so useful during the development stage, as they save time, money, and potential major headaches. Introduction to OWASP ASVS The OWASP ASVS standard has various levels of classification, ranged 0 through 3, starting a cursory verification (preliminary scans, for example) all the way through advanced where the application is secured against all known and potential threats. By definition, the zeroth classification is intended by OWASP to be where scanners are utilized, but Netsparker provides opportunity to reach all the way to the extended areas of advanced classification, too. This is because of Netsparker's in-depth heuristics, advanced scanning features including authentication and user input, and especially its incredible flexibility to be fine-tuned for specifics that are unique to each application. In the OWASP ASVS standard, there exist various verification requirement categories, such as V2 - Authentication, V3 - Session Management, and so forth. Within these categories are specific requirements that must be met in order to satisfy various classification levels. For example, in the V2 - Authentication requirement category, V2.6 requires developers "[v]erify all authentication controls fail securely to ensure attackers cannot log in" in order to meet at least level 1 "Opportunistic" certification. Netsparker can go beyond the level 0 cursory scanning, helping to meet even level 3 "Advanced" certification by assisting a development team in testing and validating their application, in this instance by testing to validate the V2.6 requirement. Other categories can find much benefit in the Netsparker web security scanner, too. 
The V5 requirement category – "Malicious Input Handling" – is one of many categories where Netsparker can particularly excel. V5.10, for example, requires developers "[v]erify that the runtime environment is not susceptible to SQL Injection, or that security controls prevent SQL Injection" – an area Netsparker checks thoroughly. In fact, Netsparker is capable of identifying over 200 kinds of vulnerabilities, far exceeding the number of vulnerabilities to secure against to meet ASVS level 3 certification. Utilize Tools to Comply with OWASP ASVS A web scanner need not be limited to only finding after-the-fact vulnerabilities. Properly utilized, Netsparker can help a development team satisfy even the most advanced requirements of the OWASP Application Security Verification Standard, in almost every category. With a good set of tools and a clever use thereof, being ASVS certified is as simple as point and click.
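As a rough illustration of how a development team might fold this kind of check into its own test suite, independent of any particular scanner, the sketch below sends a classic SQL injection probe to an endpoint and asserts that the application fails safely. The URL, parameter name, and error strings are hypothetical placeholders; a real test would target your own application and reflect requirements such as ASVS V5.10.

```python
# Hedged example: a unit-test-style probe for naive SQL injection handling.
# The endpoint, parameter, and error strings below are hypothetical placeholders.
import unittest
import requests

BASE_URL = "http://localhost:8000/products"   # assumption: a local test deployment
SQLI_PROBE = "1' OR '1'='1"                    # classic tautology payload

class MaliciousInputHandlingTest(unittest.TestCase):
    def test_sql_injection_probe_fails_safely(self):
        response = requests.get(BASE_URL, params={"id": SQLI_PROBE}, timeout=10)
        # The app should fail securely: no server-side error...
        self.assertLess(response.status_code, 500)
        # ...and no database error text leaking into the response body.
        for needle in ("SQL syntax", "ODBC", "ORA-", "sqlite3.OperationalError"):
            self.assertNotIn(needle, response.text)

if __name__ == "__main__":
    unittest.main()
```

A check like this is deliberately shallow compared with a full scan, but running it on every build catches regressions early, which is the point of treating security checks as another layer of unit testing.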
How to…Set Up a SOHO Network The need and ability to work from home has increased dramatically in the past few years. With lower prices on high-speed Internet and networking equipment, many people are discovering that building and maintaining a small office/home office (SOHO) network isn’t as difficult as it used to be. The SOHO network solves many of the problems of owning multiple PCs, such as sharing information between users and having an affordable way to get many computers onto one Internet connection. Let’s discuss some of the components needed in order to build a SOHO network. The foundation of the network is its central connecting device, which can be a hub, switch or router. Switches and hubs aren’t as useful in the SOHO environment because they aren’t smart enough to look at the IP addresses of our internal computers. Routers will provide this functionality and some additional functions that will be required to set up the network. The routers that are primarily used for small environments are considered all-in-one devices because they include many features, such as dynamic host configuration protocol (DHCP), network address translation (NAT) and a firewall. There are a few popular brands of small network routers like Linksys, Dlink and Netgear. The example presented here uses a Linksys 4-port broadband router. You also will need a network card in each of your computers. This can be purchased separately, but most computers come with them built in. To connect your computers to your router, you will need network cabling. You can purchase CAT5 (100 Mbps) or CAT5e (1,000 Mbps) pre-built, or make it yourself. In order to get the 1,000 Mbps data transfer rate, your router and network cards must support it. The final piece of hardware is the modem that will connect your router to the Internet. The modem will most likely come from your ISP. The operating system can be just about anything you like, such as any Microsoft Windows product, Linux or Mac OS X. When a single computer talks on the Internet, you have two numbers that serve to identify you, both locally, as well as on the Internet. The first, your Media Access Control (MAC), is permanently assigned to any device with networking capabilities, such as your network card. This cannot be changed (normally) and is used by local network devices to identify you. The second number is your IP address. This is used by computers on the Internet for tracking and sending data back to you. There are two types of IP addresses: routable and non-routable. Although both can be used by routers, this means that some IP addresses can be used on the Internet (routable) and some cannot (non-routable). Your routable IP address will most likely be automatically assigned to your modem by your ISP. We use non-routable IP addresses on our internal network. Your computer also uses ports to tell the remote computer what application you want to communicate with. For example, port 80 is Web traffic (HTTP). In Figure 1, notice the modem is copying the IP and MAC of the computer. That’s because the modem is representing the computer on the Internet. The modem copies the network card MAC, stopping you from swapping computers on the modem. It will only talk to 0101010101aa. Putting the Network Together Let’s get started assembling the network components. You will connect the modem to the router using a network cable, usually CAT5. The router should have an RJ45 port that is labeled WAN or modem—this is the port that connects to the modem. 
On the Linksys router, there are four ports available to connect computers to, and one port labeled WAN. You also will use network cabling to connect the computer’s network cards to the router. That’s all you need to do as far as components are concerned. Your router will most likely come with a CD that will automatically configure your network for you. This doesn’t always work, so we will discuss manually configuring it. Since your modem was copying your computer’s MAC address, it might be a bit confused when it is no longer connected to your computer. Your router has an interface that allows you to connect to it and change settings. Simply open the Web browser and type in the IP address of your router. Most routers use default addresses in the range of 192.168.x.x. (This is found in the documentation with the router.) The documentation that came with the router also should tell you what the default username and password will be. Once logged in, there will be several options available to you. The router needs to be able to copy your computer’s MAC address. There will be a configuration option for MAC cloning. To find out what the MAC of your computer is, simply click START>RUN and type in CMD. This will open the command prompt (Windows XP). Type in “ipconfig –all,” and under “physical address,” your MAC will be listed. Simply copy the 12-digit code into the MAC configuration screen of your router. Your router also will support DHCP. This will allow the router to automatically assign internal IP addresses to your computers. You can check to see if your computers are getting their addresses from DHCP by getting back to the command prompt and typing the same command listed above. It should state that DHCP is enabled, and your IP address should be similar to 192.168.x.x. If not, there is a configuration screen on the router that will allow you turn on DHCP. At some point, you may want to use a program on the Internet that requires someone to connect to your computer. For example, let’s say I want to connect to my home computer using telnet. The telnet program allows me to connect to a computer on port 23 from anywhere on the Internet and use the command prompt. Most routers will have all incoming traffic blocked—this is a feature of the firewall. Also, because you are using the NAT service, the router must know which computer to send the request to. Note: Network address translation simply converts several internal (non-routable) IP addresses to a single external (routable) Internet IP address. You have the ability to enable “port forwarding.” This allows you to configure the router to allow incoming traffic on certain ports to be redirected to a particular computer on your internal network. You must specify the internal IP address of the computer you wish to connect to and the port that the service will use. In the example in Figure 2, the router would check the configuration for port forwarding. It might find that traffic inbound for port 23 should be redirected to 192.168.1.3. All traffic with that port will be forwarded to the internal computer. Be careful when using port forwarding, because the router will accept connections from anyone, not just you. You also might want to share data with the users in your local network. Once your router is configured correctly, this should be an easy task. Simply enable sharing on a folder somewhere on your computer by right-clicking on the folder and choosing “Properties.” (This may differ slightly depending on your OS.) 
With the Properties dialog box open, there should be a tab labeled "Sharing." The Sharing tab will either have a check-box or a button that will enable sharing for that folder and its contents. Once you have enabled sharing, you can connect to the share using one of two methods. You can go to START>RUN, type in the name of the computer preceded by two back-slashes (\\computername), and press Enter. It may open a logon box. If so, you will have to type in the username and password that were created for the computer you are connecting to. After the authentication, you will see a list of folders. Among that list should be the folder you shared.
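As a footnote to the ipconfig step earlier in this article, here is a rough, cross-platform way to read the same MAC and IP information programmatically. It is only a sketch: uuid.getnode() may return a random value on machines where the hardware address cannot be read, and the socket trick reports whichever local address the operating system would use to reach the Internet.

```python
# Rough, cross-platform lookup of the MAC and local IP address discussed above.
import socket
import uuid

def local_mac() -> str:
    # uuid.getnode() returns the hardware address as a 48-bit integer
    # (or a random number if it cannot be determined).
    node = uuid.getnode()
    return ":".join(f"{(node >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

def local_ip() -> str:
    # Open a throwaway UDP socket to see which local address would be used
    # to reach an outside host; no traffic is actually sent.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.connect(("8.8.8.8", 80))
        return sock.getsockname()[0]

if __name__ == "__main__":
    print("MAC address:", local_mac())
    print("IP address: ", local_ip())
```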
A computing Platform is a type of hardware and/or software where application programs may run. Operating Systems are Platforms, as are different types of computing hardware. Special-purpose Platforms include Routers, Remote Access Servers and database servers. A Platform is frequently associated with its own Credentials database. A Platform (as it relates to Identity Management (IdM)) is a type of target system. There are many possible types of platforms, including:
ICT Accessibility is important today. But will it be important in 5 years' time, and what will it look like? What should organisations that are involved, interested or dependent on ICT Accessibility be planning for over the next 5 years? Firstly, a short definition of ICT Accessibility to ensure that we are all on the same page. The international standard ISO 9241-171:2008 (Ergonomics of human-system interaction — Part 171: Guidance on software accessibility) defines accessibility as: "Usability of a product, service, environment or facility by people with the widest range of capabilities" The term "widest range of capabilities" is really a politically correct way of saying "including people with disabilities". This article will use a slightly more limited definition: "ICT for people with disabilities including: vision, hearing, speech, muscular-skeletal, learning and ageing". Ageing is included not because it is a disability in its own right but because as we age we will tend to become less able through diseases such as Parkinson's or Alzheimer's or failing eyesight or hearing. To try to answer the questions, this article will look back 5 years, look at the present and then extrapolate 5 years into the future. ICT Accessibility is a complex, intertwined area, so the discussion will be based around the following questions: - How important is it for an individual to access digital information? - What is the impact of laws, legislation and standards? - Are decision makers aware of the requirements and benefits? - Do the various professionals have the implementation skills? - How does technology help or hinder? How important is it for an individual to access digital information? This is the key question that influences changing views on accessibility. Five years ago, primary sources of information and services were offline: paper, telephone or face-to-face. In some cases alternative formats were offered, for example Braille or large print. Some basic information (brochureware) and some bleeding edge services were available on-line. The majority of the population were not regular users of the Internet. People with disabilities had access to the information and services they needed off-line, and access to digital information was not that important. However, there was an awakening to the potential benefits of access to digital information, especially amongst those with vision impairments who could access such information through screen-readers rather than being dependent on the information being transformed into another format. Today, digital is the preferred channel for most providers: how often do you hear/see "for more information go to our website"? This implies that the information is on the web but not available in any off-line format. Better service is now provided via online shopping, banking and travel than is available face-to-face or via the telephone. In particular there is a strong push in the public sector towards e-government as a way of providing better services more efficiently; hardcopy documents and forms will continue to be provided but only grudgingly. Some providers have gone the next step with information and services only available on-line: Amazon, iTunes, EasyJet, comparison web sites etc. Where possible the product has also gone digital: music and electronic books.
We are seeing the slow death of printed books; for example Amazon now sell more electronic than paper versions of some titles, and the Oxford University Press has announced that it is not going to produce another printed version of the Oxford English Dictionary, which will now only be available on-line. The other major area of push towards the need to access on-line is the meteoric rise of social networks of all sorts. Lack of access to digital information, services and products is now serious enough to have a name, 'the Digital Divide'. Those on the wrong side of the divide are now disadvantaged but can still survive. According to the Office for National Statistics, about 1 in 5 UK adults are not on-line. This group includes people who are old, poor, or lack the necessary skills, and also a small group who wish to remain off-line. The British Computer Society (BCS) has just published a report that shows access to IT makes people happier; not only does it enable people to do things better but it also improves their view of their quality of life. Unfortunately some people with disabilities find themselves on the wrong side of the divide, even though they are keen to be on the right side, because the information, services and products are not provided in an accessible form. By 2015 the trend from off-line to digital information, services and products will be complete. Anything that can be provided digitally will be digital by default and will only be available in other formats by request, if at all, and probably at a premium. By this date anyone on the wrong side of the divide will find it very difficult to carry on as a member of society. They will lack access to basic government-supplied services, most commercial services such as insurance, banking, many retail outlets, and all electronic social networks. There will be pressure from a new group, "the recently old". This group will have been using digital channels for some years and will be furious if they cannot continue to do so because of illnesses of old age. As the digital divide closes down it is essential that people with disabilities are not left on the wrong side through no fault of their own, and therefore everything digital needs to be accessible. It would not be overstating it to say that by 2015 access to digital information will be considered a basic human right. What is the impact of laws, legislation and standards? Legislation existed in many countries relating to disability, including the UK Disability Discrimination Act 1995 and the US Rehabilitation Act 1973 (and in particular Section 508, 1998). These laws were either limited in relation to ICT or only relevant to government, and they also seemed to lack teeth. They did not have a major impact on the accessibility of most ICT systems. The W3C developed guidelines for web accessibility—the Web Content Accessibility Guidelines (WCAG 1.0)—in 1999. The British Standards Institute (BSI) published PAS 78: Guide to good practice in commissioning accessible websites in 2006. At this time it was not clear if the legislation applied to ICT and, if it did, whether it only applied to specific parts of ICT: did it apply to websites, did it just apply to public sector organisations? Because of this confusion the guidelines and guides were not enforced by legislation. This meant that most webmasters and their organisations were either unaware of them or ignored them. In the last year or two, case law has made it clear that all areas of ICT are covered.
Probably the most publicised example is the case against Target (a large US retail chain). An individual sued Target because its web site was not accessible and therefore he was getting a poorer service than members of the able-bodied community. It took at least two years to go through the courts. In the end it was agreed that the website had to be accessible, Target had to pay out compensation to the individual and also to a group who took out a class action, and Target had to fix the site within a given timescale. The total cost came to more than $10M. There is still a lack of awareness amongst many business decision-makers, and plaintiffs are still put off pursuing claims because of the effort involved and potentially small returns. In 2010 eBay announced changes to their systems to support users of screen readers. There were good moral and financial reasons for implementing the changes, but it can be assumed that the possibility of legal action also encouraged their implementation. There are still cases going through courts, for example Donna Jodhan v the Canadian Government. The number of cases going to court is likely to decrease as organisations cry 'mea culpa' rather than spend money on legal support for a case they are likely to lose. In 2010 several acts are going through the US Senate, Mandate 376 Phase 2 is progressing through the EU, the United Nations Convention on the Rights of Persons with Disabilities has been ratified by most member states, and rules and regulations are being passed through many other governments. All of these will have had a major impact by 2015. By 2015 legislation across the world should be clear and have sufficient teeth so that it cannot be ignored. As it cannot be ignored, any relevant person (manager, procurer, technician, user) will be aware of the legislation and the importance of accessibility. Are decision makers aware of the requirements and benefits? ICT systems will only be fully accessible if accessibility is built in during all phases of implementation. This will happen if the decision makers dictate that it should. Ideally the edict should come from top management but it could be at the level of procurement or a highly motivated development manager. By 2005 most decision makers were aware of the need to provide physical access to people with disabilities, most obviously users of wheelchairs. This was certainly true in the UK and North America but may not have been so common in some other parts of Europe and the World. The decision makers were aware because the laws were clear and because the problem was easy to understand: a client in a wheelchair at the bottom of a flight of stairs leading to their building was not a photo-call that a CEO wanted to deal with. The same could not be said about ICT accessibility. Firstly the law was not clear and had not been tested. But also the issue was not so easy to understand or even be aware of. If the issue was raised, the initial reaction was "how can blind people use computers" not "what has to be done to our systems to make them easy to use by people who are blind?". The users were only beginning to push for ICT accessibility because access to ICT was less important and because alternative formats such as braille and large print were the main requirement. Today the situation is not so different to 2005, with most decision makers still not being aware of the need for accessible ICT. The biggest improvement has been in the public sector where legislation has made the requirement clear.
In the US, Section 508 makes it mandatory for government organisations and in the UK the push to e-government and the Disability Equality Duty have raised the awareness significantly. The commercial sector is only just beginning to understand and be aware through court cases such as Target and by major organisations, most recently eBay, realising the importance of accessibility and going public with the changes they have made and the benefits to their clients and to their organisations. The decision makers are also becoming more aware because of the noise being generated by disabled users. People are complaining when systems are not accessible and these complaints are beginning to percolate up to those who can instigate the changes. By 2015 most decision makers will be aware of the need for accessible ICT, this greater awareness will be driven by: - Legislation will have been extended, given more power and written to explicit include ICT. - Disabled Users will become more vocal. - The ageing population will include users who expect to be able to access digital information and who will not accept that age related illnesses have removed that ability. - The economic imperative to move towards digital information will highlight the need to make that information available to all. The only question is, will this increased awareness always ensure that the systems are made accessible? There will still be a conflict between using the latest whizzy technology and the need to ensure accessibility. Do the various professionals have the implementation skills? Even if the decision makers decided that all ICT systems should be accessible it would not be possible if the professionals who were implementing it lacked the necessary skills. The professionals include the designers, coders, content creators, and testers. A small cohort of dedicated professionals were available to implement accessible systems, but they were the exception. Most professionals knew nothing about accessibility and were not interested in finding out. Professional education ignored accessibility with tutors not understanding why it should be included. In 2010 the number of skilled professionals has grown significantly but is still a small minority of those involved in implementing and developing ICT. If there was a sudden drive to improve the accessibility of ICT then skills would become a real issue. The only way to know if an system is accessible is to test it. Testing needs to be done throughout the project and should use automated checking tools and user testing. There are an increasing number of professional testers who have the necessary skills to run the automated and user tests. There are some good signs in the education field: - Accessibility and user-centred design are now included as modules in many ICT courses, but they still tend to be add-ons delivered quite late in the schedule. Accessibility is still not built-in as an inherent part of implementation. - The BCS is reviewing accessibility across the whole of the organisation. One aspect is to look at the inclusion of accessibility in SFIAPlus, the IT skills, training and development standard. Inclusion of accessibility in the right places in SFIAPlus will have a significant long term impact on the development of accessibility skills. - Middlesex University now offers a MSc in Digital Inclusion. This trend in education should ensure that accessibility becomes business as usual in the next few years. 
By 2015 skilled implementers should be available and should be willing to keep their skills honed because of demands for such skills from aware decision-makers. Technology—Will Assistive Technology keep up? There are two areas of technology that need to be considered: - Assistive Technology: covers hardware and software that helps people who cannot see the screen well, or find it difficult to use a standard keyboard or mouse. - The interface between the system and the user: drives screens, keyboards and pointing devices directly and needs to be accessible to the widest possible population, but it also needs to communicate with Assistive Technologies so that users of these technologies can access all the functions of the system. Speech recognition and text to speech were both available but, without being too disparaging, they were both fairly clunky and were only used by those who had no option. If you were blind, text-to-speech was the main way you could get access to digital information. If you could not use a keyboard, voice recognition software did enable you to input text and control the computer. Predictive text was originally developed as an Assistive Technology: users who could only type very slowly had to type only a few letters rather than a whole word or phrase. There were a variety of alternatives to the standard mouse, ranging from bigger mice, to rollerballs, through to controlling the mouse through winking an eye. The increase in processing power and significant advances in the software now mean that solutions that were clunky in 2005 are now so good that they are being used by people without any disability as they become a natural and efficient way to interact with ICT. This has led to some assistive technologies being built into standard products. Examples include VoiceOver text-to-speech on Apple products, and voice control in new cars; saying 'call home' whilst driving is much easier and safer than fiddling with any buttons. Built-in touch technology has provided solutions for many people, for example those suffering from rheumatism or RSI, who cannot use a standard mouse. Other alternatives to standard keyboards and mice are available but due to limited demand they are expensive. There will be new forms of AT, such as direct brain connections and wearable devices, that will enable certain people to more easily control and access their ICT environment. There will be a continuing improvement in the power available to AT: for example, text to speech today tends to be fairly flat; with more power it will be possible to include emotions and clearer pronunciation and intonation. Technology—Will the User Interface be accessible? In 2005 most of the input and output was text, and that meant that it was fairly easy for the Assistive Technologies to interact. Some ancillary technologies were causing problems; probably the biggest examples were Flash and PDF, which did not always interface well to the Assistive Technologies. There were also some web development tools that produced HTML that did not follow the W3C guidelines and was, by definition, not fully accessible. In fact it was difficult to find a tool that made it easy to produce accessible HTML. Significant strides have been made since 2005. Most development tools can now produce websites that are accessible; the issue now is that it is still up to the creator to use the tools in the right way, as the tools give very little assistance or guidance on how to create accessible sites.
Adobe now provides PDF and Flash products that can be made accessible and has worked with the Assistive Technology vendors to ensure that the interface works. Unfortunately there are other new technologies that have been developed that are not accessible, for example the standard YouTube screens are not accessible; so if YouTube clips are included in a website the site is not fully accessible to users of screen readers or users who cannot use a mouse. However YouTube now supports closed captioning to support people who are deaf or hard of hearing. Developers of other widgets have not been aware of the accessibility issues and have created solutions that are not accessible. Vendors are recognising the need for solutions in specific niches, for example Xenos Axxess is a tool to create accessible transaction reports (e.g. bank statements) from non-accessible print streams. It is impossible to predict all the new user interfaces that will be used in five years time but 3D, interactive gestures and emotions will be three areas that will be commonplace. Emotions will be supported with the Emotion Markup Language (EML) that is currently being developed by the W3C. The EML will be added to text and then a text-to-speech engine will be able to vocalise the text with the right intonation or an avatar could make a suitable gesture or facial expression. The question with all of these interfaces is will the system be able to interface to the user, directly or via a suitable Assistive Technology, so that it is accessible? New and exciting interfaces will always be attractive to the marketing departments, as a way of being ahead of the competition. It will be an uphill struggle to stop them being used if they are not accessible. The likelihood is that new interfaces will be developed to include accessibility features built-in, however there will be a need for continuous vigilance by the accessibility community to ensure that this is the case. The community will have to recognise the new interfaces early and put pressure on the developers, standards bodies and users of the technology to ensure that it is accessible from first delivery. - Accessibility will not be optional: everyone who provides digital content, services or products will need to make sure that they are accessible. - There will be moral, legal and financial imperatives for this to happen. In particular there will pressure from users to be on the right side of the digital divide as a human right. - Awareness will be much higher both at the user and the supplier end. - Skill levels will have increased and should be sufficient for the demand. - New user interface technologies will need to be accessible. Ensuring this happens will be the major challenge to the accessibility community.
IBM Research Shows Off Two New Watson-Related Medical Projects

IBM Research announced two new Watson-related cognitive computing projects for the medical field.

IBM Research announced two new cognitive computing technologies, based on Big Blue's Watson supercomputer system, that are expected to help physicians make more informed and accurate decisions faster and to cull new insights from electronic medical records (EMR). The two Watson-related cognitive projects, known as "WatsonPaths" and "Watson EMR Assistant," are the result of a yearlong research collaboration with faculty, physicians and students at Cleveland Clinic Lerner College of Medicine of Case Western Reserve University. Both are key projects that will create technologies that can be leveraged by Watson to advance the technology in the domain of medicine, IBM said. With the WatsonPaths project, IBM scientists have trained the system to interact with medical domain experts in a way that's more natural for them, enabling the user to more easily understand the structured and unstructured data sources the system consulted and the path it took in offering an option. The Watson EMR Assistant project aims to enable physicians to uncover key information from patients' medical records in order to help improve the quality and efficiency of care. "WatsonPaths is designed to augment the problem-based learning methods that Cleveland Clinic medical students employ in the classroom," J. Eric Jelovsek, M.D., director of the Cleveland Clinic Multidisciplinary Simulation Center, said in a statement. "The vision is for WatsonPaths to act as a useful guide for students to arrive at the most likely and least likely answers to real clinical problems, but in a classroom setting. Of course, it is also easy to visualize how this type of technology could eventually be a tool for physicians to use in real-time clinical scenarios—a powerful guiding reference to consult when diagnosing and identifying the best treatment options." After displaying Watson's capabilities on the game show Jeopardy, where the system trounced human competitors, IBM announced that health care would be a key area of focus for future Watson applications. The new technology is not yet available for everyday commercial use.
TORONTO, ONTARIO--(Marketwire - Jan. 29, 2013) - At this point of the year there is a lot of talk about how to prevent and treat the winter blues. However, a new survey commissioned by the Florida Department of Citrus (FDOC) has found that Canadians may want to put more thought into how to overcome feelings of stress all year long. According to the survey, 67 per cent of Canadians rate their daily stress as moderate or higher, with 38 per cent of that group experiencing high levels of daily stress. Additionally, 46 per cent of Canadians report that stress is not related to a specific time of the year. Although Canadians understand the effects of stress and how to improve these conditions, many do not follow through with steps they know can help them feel better: healthy food choices, exercise and a balanced diet. Canadians struggle with maintaining a healthy diet when stressed When it comes to healthy eating, behavior does not always mirror beliefs. Though three quarters (77 per cent) of Canadians believe that a balanced diet can help their physical and mental well-being and 59 per cent believe that a change in diet can help to relieve stress, many find this hard to do. In fact, over a third (36 per cent) of Canadians say that the inability to maintain a healthy diet is a side effect of stress and that stress often leads to unhealthy food choices (40 per cent). "Not feeling in control is one of the biggest contributing factors to stress and emotional eating," says Lydia Knorr, registered dietitian for the FDOC. "Canadians can maintain control of their food choices and help ease some of the symptoms of stress, by preparing food in advance and having healthy options on hand. Since hypertension is one of the side effects of stress, I recommend foods rich in potassium, such as citrus fruits, as they can help reduce the risk of high blood pressure." Exercise is not top of mind when Canadians are stressed Stress also negatively impacts how often Canadians exercise. Despite the fact that many find exercise helps relieve symptoms of stress, almost half of Canadians (43 per cent) report that they either exercise less than usual or stop exercising altogether when they are stressed. In fact, exercise, along with being with friends or family, is considered to be a popular form of stress relief for Canadians. "I find that one of the biggest challenges Canadians face is working up the energy to exercise after a long and stressful day," says Eva Redpath, fitness expert for the FDOC. "I recommend a vitamin-rich snack such as a glass of 100 per cent pure Florida orange juice or grapefruit juice, full of natural sugars, to give you the energy you need. The push is well worth it, as the workout can help you burn off some steam and feel better about maintaining a healthy lifestyle." Canadians know how to maintain a balanced diet but don't follow the rules In addition to believing that eating well and maintaining a healthy lifestyle helps us feel better in stressful circumstances, Canadians also have a good understanding of how to maintain a balanced diet. The problem is that similar to exercise, their actions don't reflect their knowledge. Although the Canada Food Guide recommends seven to 10 servings of fruits and vegetables a day, only 10 per cent of Canadians report this consumption level. The average number of daily servings of fruits and vegetables consumed by Canadians is four, with 22 per cent of Canadians only consuming one to two servings a day. 
Canadians can't pretend they don't know any better, as 61 per cent admit that they should ideally be consuming at least five servings of fruits and vegetables a day. "A nutritious diet and healthy lifestyle plays an integral role in supporting both physical and mental well-being," says Knorr. "Increasing your daily intake of fruits and vegetables doesn't need to be an added challenge. Simply incorporating fresh, in-season fruit like Florida grapefruit into meals can help provide a natural boost in energy while delivering essential nutrients like vitamin C. This is also an extra benefit for the one in four Canadians who report that illness is a side-effect of stress." Other highlights from the survey: - It may be no surprise that work, school and money are reported as key sources of stress by more than half of Canadians, with balancing family and other areas of life following close behind at 41 per cent. Additionally, 29 per cent of Canadians consider their health to be one of the main sources of stress in their daily life. - Most Canadians (78 per cent) do not attribute their stress to the city or neighbourhood that they live in. - Half of Canadians say that stress affects their sleep (52 per cent) and/or makes them irritable or angry (50 per cent). - 56 per cent of Canadians report eating less than five servings of fruits and vegetables a day. About the Florida Department of Citrus (FDOC) The Florida Department of Citrus (FDOC) is an executive agency of the Florida government charged with the marketing, research and regulation of the Florida citrus industry. Its activities are funded by a tax paid by growers on each box of citrus that moves through commercial channels. A few of the popular varieties of Florida citrus fruit available in Canadian supermarkets are Ruby Red Grapefruit, Flame Grapefruit, and Marsh Grapefruit with 100 per cent pure Florida orange juice and Florida grapefruit juice available all year round. About the survey The results presented are based on a survey conducted by EKOS Research Associates from December 1to 17 2012. Using EKOS' Probit© hybrid online-telephone panel, 2,026 Canadians aged 18 and over were surveyed. Data was weighted by region, age, gender and urban/rural/remote density using the most recent census data.
Once the Auvik collector is successfully installed on your network, it automatically searches on its local subnet to find more networks. When new networks are discovered, you can choose whether to have Auvik scan them or not. After you approve scanning, Auvik uses a tool called Nmap to identify active hosts on the network. The system looks for open ports to help identify many characteristics of the host, including make and model. Refining the topology During the network scan, Auvik looks for network devices that expose the following information: - ARP tables - Forwarding tables (Layer 2) - IP assignments - VLAN associations - Layer 1 discovery protocols, such as Link Layer Discovery Protocol (LLDP), Cisco Discovery Protocol (CDP), and Foundry Discovery Protocol (FDP) This information is pulled from SNMP through various management information bases (MIBs), and by issuing "show" commands through the CLI on network elements. From there, Auvik uses logic to determine connections and model your network. Where definitive connection information is unavailable, the system uses a set of proprietary algorithms to infer the remaining connections. On your map, you can see: - Wired and wireless connections when we have strong evidence of a physical connection - Inferred wired and inferred wireless connections when Auvik knows the connection must exist but doesn’t have the exact port mapping For example, there may be an unmanaged switch between a router and a set of PCs. The system will realize this and add an unmanaged device to your map. Refining Device Information Once a device is discovered, Auvik looks for open services so it can identify the class of device, such as printer, switch, firewall, access point, laptop, or phone. The system uses a number of tools and services to refine device information. They include: - SNMP v1/2c, and v3 - Generic credentials (public, private) are tried to identify the device using System-MIB - SSH, Telnet, and CLI - Multicast Domain Name System (mDNS) - SMB / Samba - Windows Management Instrumentation (WMI)
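For readers who want a feel for what such a discovery sweep looks like, the sketch below runs an Nmap ping scan across a subnet using the third-party python-nmap wrapper. It is only a simplified stand-in for what a commercial collector does (no SNMP polling, no topology inference), and it assumes the nmap binary and the python-nmap package are installed and that you are authorized to scan the network in question.

```python
# Simplified host-discovery sweep, loosely analogous to the first scan step
# described above. Requires the nmap binary and the python-nmap package;
# scan only networks you own or are authorized to audit.
import nmap

def discover_hosts(cidr: str = "192.168.1.0/24") -> None:
    scanner = nmap.PortScanner()
    scanner.scan(hosts=cidr, arguments="-sn")   # ping scan: find live hosts, no port scan
    for host in scanner.all_hosts():
        info = scanner[host]
        print(f"{host:15} state={info.state()} hostname={info.hostname() or '-'}")

if __name__ == "__main__":
    discover_hosts()
```

A real discovery pass would follow this with port and service identification and SNMP/CLI queries against the managed devices it finds, which is where the connection and VLAN data used to build the map comes from.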
Internet users would do well to be extra careful when attempting to use public Wi-Fi hotspots in Europe, as hackers and cyber crooks have lately ramped up their efforts to steal personal and financial data sent over these unsecured networks. The warning comes from Troels Oerting, the head of Europol's cybercrime centre, who says that the law enforcement agency is helping several European countries in the aftermath of these types of attacks. "We should teach users that they should not address sensitive information while being on an open insecure Wi-Fi internet," he pointed out for the BBC. "They should do this from home where they know actually the Wi-Fi and its security." Oerting says that the attackers aren't inventing new attack techniques, but are using old, tried-and-true ones that still obviously work: they usually set up fake hotspots with a name (SSID) similar to the one set up by coffee shops, stores, hotels, libraries and other public establishments. Once users connect to it and start using it, all information they send out is captured by the criminals. There are a number of things you can do and precautions you can take to make sure that you are as safe as you can be while using public Wi-Fi networks. For one, you should avoid accessing services such as online banking, ePayment services or any site that stores payment information via open public Wi-Fi. If you simply must do some online banking, use your mobile data plan with your bank's mobile app.
Reverse engineering has been used by the military, big companies and many more. It is the act of taking something (a computer, device, weapon, or piece of software) and "stripping" it down to learn or analyze its inner workings in detail. Compaq, one of IBM's major competitors, did this in the early 1980s, using the reverse engineering process to dissect the IBM PC and build their own product. In this blog post, we list 7 tools for reverse engineering on the Microsoft Windows platform that have influenced the reversing community the most. UPX (Ultimate Packer for eXecutables) is an open source executable packer that is common in the malware scene (albeit often heavily modified). UPX supports all major operating systems and both x86 and x64 platforms. UPX on its own features no anti-debug checks, no scrambled code/stolen bytes and no encryption. For this post I have coded my own software in the C language to demonstrate how UPX works, what it does to the .code/.data segments in the PE header and how you can rebuild an executable that has been packed with UPX.
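Because a stock UPX build applies no encryption or anti-debug tricks, a packed binary is usually easy to recognize: the stub leaves markers such as the UPX0/UPX1 section names and the "UPX!" magic bytes in the file. The Python sketch below is a simple, assumption-laden heuristic (not a full PE parser) that scans a file for those markers; since a modified or repacked stub can strip or rename them, the absence of markers does not prove a file is unpacked.

```python
# Minimal heuristic check for default UPX markers in an executable.
# A heavily modified UPX stub may hide these strings entirely.
import sys

UPX_MARKERS = (b"UPX0", b"UPX1", b"UPX!")

def looks_upx_packed(path: str) -> bool:
    with open(path, "rb") as handle:
        data = handle.read()
    hits = [marker.decode() for marker in UPX_MARKERS if marker in data]
    if hits:
        print(f"{path}: found UPX markers {hits}")
        return True
    print(f"{path}: no obvious UPX markers")
    return False

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        looks_upx_packed(filename)
```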
NASA and Google are working together to send new 3D technology aloft to map the International Space Station. Google said Thursday that its Project Tango team is collaborating with scientists at NASA's Ames Research Center to integrate the company's new 3D technology into a robotic platform that will work inside the space station. The integrated technology has been dubbed SPHERES, which stands for Synchronized Position Hold, Engage, Reorient, Experimental Satellites. The technology is scheduled to launch to the orbiting station this summer, although Google said a specific date hasn't been set. "The Spheres program aims to develop zero-gravity autonomous platforms that could act as robotic assistants for astronauts or perform maintenance activities independently on station," according to a Google+ post from the company's ATAP (Advanced Technology and Projects) group. "The 3D-tracking and mapping capabilities of Project Tango would allow Spheres to reconstruct a 3D-map of the space station and, for the first time in history, enable autonomous navigation of a floating robotic platform 230 miles above the surface of the earth." The project, which includes scientists from universities, research labs and commercial partners, is led by Google's ATAP group. "Mobile devices today assume the physical world ends at the boundaries of the screen," said Johnny Lee, the Project Tango leader, in a YouTube video. "Our goal is to give mobile devices a human scale understanding of space and motion." Google's 3D sensing smartphone, which is still in the prototype phase, has customized hardware and software, including a 4-megapixel camera, motion tracking sensors, computer vision processors and integrated depth sensing. The sensors make more than a quarter of a million 3D measurements every second, fusing the information into a 3D map of the environment. NASA began working with Google last summer to get Project Tango working on the space station. The Intelligent Robotics Group at the Ames Research Center is looking to upgrade the smartphones used to power the three volleyball-sized, free-flying robots on the space station. Astronauts will exchange the current smartphones used in the Spheres with the Google prototypes. Each robotic orb is self-contained, with power, propulsion, computing and navigation equipment, along with expansion ports for additional sensors and appendages, such as cameras and wireless power transfer systems, according to NASA. "The Project Tango prototype incorporates a particularly important feature for the smart Spheres -- a 3D sensor," said Terry Fong, director of the Intelligent Robotics Group, in a statement. "This allows the satellites to do a better job of flying around on the space station and understanding where exactly they are." In February, Google and NASA scientists took the smartphone prototypes on a zero-gravity test flight. The engineers used the flight to calibrate the device's motion-tracking and positioning code to function properly in space. NASA scientists say they envision that 3D-enabled Spheres could be used to inspect the outside of the space station or the exterior of deep space vehicles. While Google's 3D technology is set to go to the space station this summer, a SpaceX resupply mission, which will carry legs for the humanoid robot working on the orbiter, is slated to launch this afternoon. SpaceX was set to launch its third resupply mission on Monday but the liftoff was postponed due to a leak in the Falcon 9 rocket that will carry the Dragon cargo spacecraft aloft.
This article, Google tech to bring 3D mapping smarts to NASA's space station robots, was originally published at Computerworld.com.
Today's smartphones are pretty good computers, but we've tried out three computational powerhouses that make the slimmest phone look like ENIAC. (The world's first electronic computer, unveiled in 1946, weighed 27 tons and consumed 1800 square feet of floor space.) The Cotton Candy, the MK802 II, and the Raspberry Pi are amazingly tiny, incredibly inexpensive, and eminently customizable. They make terrific platforms for hobbyists fond of experimentation, and they're ideal for students interested in learning how to program, but they can also serve as ordinary productivity machines. These miniature marvels eschew the power-hungry x86 processors found in desktop and laptop PCs in favor of mobile CPUs and GPUs, but each relies on an external monitor or HDTV, connected via HDMI, to display its user interface and other video output. In fact, the Cotton Candy and the MK802 II are the same shape and size as a USB memory stick, and plug directly into a TV. Thanks to that skimpy hardware, these computers can operate on just the trickle of energy provided by the display they're connected to. Alternatively, you can plug in the same type of USB AC power adapter that modern smartphones and tablets use. Much of the appeal of these pint-size PCs lies in their software versatility. Each device can boot from a MicroSD card containing an operating system disc image (typically some flavor of Android or a Linux distro tailored to its hardware set). If your tinkering utterly demolishes the stability of the OS, you can just overwrite the memory card with a new image and start over. After comprehensive testing, I found that each of these micro PCs has its upside and downside, but all three devices shine in distinctly different scenarios.

What you'll need to provide

Although each of these micro PCs is incredibly inexpensive, you'll need to spend a little more cash on peripherals and accessories to render them completely functional. You'll absolutely need a USB mouse and keyboard, for instance, although you could borrow the input devices from another computer you already own. Raspberry Pi buyers will want to pick up an enclosure for protection (the device arrives as a populated circuit board sans case). Depending on the port selection on your device, you might need to grab a USB hub to connect your peripherals. Be aware, however, that not every AC adapter will provide enough juice for the computer and a passive hub. (In my situation, the charger for my Kindle Fire did, but my smartphone charger did not.) You might also need to provide some of your own cables: The MK802 II and Cotton Candy come bundled with enough cables for the typical usage scenario, for instance, but nothing is included with the Raspberry Pi. If your chosen device lacks on-board flash storage, you'll also want to buy a MicroSD card on which to burn your operating system disc image. Suppliers who sell micro PCs typically also stock cards with various OSs preburned on them; but if you want to do it yourself, a tutorial at eLinux.org will walk you through the process.

How we tested

I had difficulty finding benchmark software that behaved consistently across all three platforms. In consultation with the PCWorld Labs crew, I initially planned to use three browser-based benchmarks (SunSpider, Peacekeeper, and WebVizBench), but the fact that each device used a different browser made an apples-to-apples-to-apples comparison impossible. Not only that, but having each computer connect to the Internet via Wi-Fi created yet another uncontrollable variable.
In the end, I was skeptical enough of the results these tools produced in this particular scenario that I elected not to disclose the results; I just didn't think they proved anything. Although this decision renders my opinion more subjective than usual, I'm confident it was the right way to go. The tiny-PC market is evolving so rapidly that products are in danger of becoming obsolete before they hit virtual retail shelves. FXI Technologies announced its Cotton Candy micro PC just over one year ago, but the company has had to put the device through numerous design changes to keep it competitive. The result is a product that could appeal to both consumers and business users, whenever it ships as a finished product, that is. As of this writing, the Cotton Candy's firmware and operating system are still in beta, and the manufacturer states that the device in its current state is intended only for developer use. Developer units are available for purchase direct from the manufacturer for $199. For this evaluation, FXI sent us a unit with two MicroSD cards containing beta builds of Android Ice Cream Sandwich and Ubuntu Linux, respectively. The manufacturer is in the process of certifying the Cotton Candy with Google, but currently you cannot load apps from the Google Play Store unless you load a user-created Android OS image that's downloadable from the FXI user forums. For more, read our full review. The MK802 II is an intriguing Android-on-a-stick computer made by the Chinese manufacturer Rikomagic. It resembles a USB thumb drive. Inside the cheap-feeling plastic case, you'll find a pedestrian, single-core Allwinner A10 CPU (based on ARM's Cortex-A8 architecture); 1GB of RAM; and 4GB of flash storage (half of which is consumed by the rooted Android 4.0.4 operating system, aka Ice Cream Sandwich). The unit's Mali 400 GPU is theoretically capable of playing 1080p video, although the stick seems stuck at 720p for other applications. This tiny computer is available from a few small online retailers, including W2Comp.com, which is where we acquired ours. The firm is based in Hong Kong, but it sells the MK802 II for a very competitive $55, including free shipping to the continental United States. It took a while to reach us after crossing the ocean and clearing U.S. Customs. For more, read our complete review. Very small computers, based mostly on the 6.7-by-6.7-inch Mini-ITX motherboard, have been around for a while, but the launch of the 3.4-by-2.2-inch Raspberry Pi generated a frenzy of public interest. The model A (256MB of memory, one USB port, no LAN port) sells for $25, while the model B reviewed here goes for just $35. The model B has 512MB of RAM, two USB ports, and 10/100Mbps Ethernet. The design intent behind the Raspberry Pi was to rekindle interest in computing as a children's hobby, with modern PCs having become too expensive for parents to allow their kids to experiment with them. But the machine has become a hit with grown-ups, too, and the tiny computer has spawned dozens of competitors. The nonprofit Raspberry Pi Foundation recently announced plans to build 30,000 units each month. For more, read our entire review. This story, "Raspberry Pi, Cotton Candy, and MK802 II face off in this battle of pint-size PCs" was originally published by PCWorld.
<urn:uuid:aa5d1d6b-a879-408e-96d4-eb19f00cce0b>
CC-MAIN-2017-04
http://www.itworld.com/article/2717286/operating-systems/raspberry-pi--cotton-candy--and-mk802-ii-face-off-in-this-battle-of-pint-size-pcs.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00533-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947058
1,583
2.671875
3
Free Speech Online: Where Is the Line Drawn? When America's Founding Fathers drafted the First Amendment, no one could have imagined there would eventually be a technology — the Internet — that would allow Americans to speak to people around the globe within seconds. Now, more than 200 years later, some wonder if the existence of this modern technology complicates one of America's basic rights: the freedom of speech. "The Internet doesn't change the dynamic in any fundamental way. What it does is it presses hard on some existing problems," said John Palfrey, faculty co-director of the Berkman Center for Internet & Society at Harvard University and a principal investigator with the OpenNet Initiative. "If I say something that's harmful about you online, it can be read instantaneously by billions of people around the world at basically no cost. The number of people who can hear [that] speech [can] be vastly greater than it could have been before, and many more people are holding the megaphone that could reach that large group of people." Free-speech advocates argue that despite this scope and speed, it's unnecessary to create laws that restrict speech online. Others disagree. On the Internet, there's no segregation of material, no cellophane wrapper, nothing to protect children from seeing graphic pornography unless you're proactive. Welcome to the Village Screen Before the rise of technology, communities came together at the village green, but now with the Internet, people from around the globe meet on the "village screen," and that presents some unique challenges, according to Gene Policinski, vice president and executive director of the First Amendment Center. "We have more opportunities to express ourselves than we had even 20 years ago," Policinski explained. "Speech that might have gone unnoticed, that might have caused no harm, now gets noticed [and] can be global and eternal. We're seeing comments about one's employer, one's principal or one's teacher — that might have been scrawled on the wall or in a note — now posted on a Facebook page." Even though this is a new wrinkle in the free speech debate, Policinski doesn't see the need for new laws. "I'm very wary of proposals that restrict speech just on the Web for some special reason," he explained. "I'm sure when the telegraph, telephone, radio and TV were new, everybody thought we needed special kinds of regulations [on] that speech." Brock Meeks, director of communications for the Center for Democracy & Technology, agrees. "We want prosecutors to use the laws that are on the books right now to go after the perpetrators of crime on the Internet, not to create new laws just because something is being carried out in cyberspace," he said. "To put those kinds of restrictions online or to treat the Internet differently than the nonelectronic world just doesn't work." The government has tried to do this before and failed. Take the Communications Decency Act (CDA), which "was the very first piece of legislation that tried to put restrictions on how people spoke on the Internet," Meeks said. In 1997, the Supreme Court struck down the CDA except for Section 230, which says "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This decision gave the Internet the same free speech protection as print. "You can print four-letter words in magazines and newspapers, and it's not against the law," Meeks said.
“[But] you can’t say it on TV on open broadcast networks without getting in trouble [with] the [Federal Communications Commission]. On the Internet, those standards don’t exist, those laws do not transfer, and the 1996 ACLU v. Reno [decision] cemented that First Amendment protection.” There are certain forms of speech that are not protected under the First Amendment, though, such as defamation, certain types of incitement to violence and child pornography — that speech is illegal both in the online and offline mediums, Palfrey said. Policinski believes the government should define these limits “with great caution.” “One person’s hate speech is another person’s political statement,” he explained. “The First Amendment really exists to protect speech on the fringe because if it’s speech we all agree is fine, it doesn’t need to be protected. So by definition, the First Amendment protects speech that pushes the limits of what you or I or someone else might find comfortable.” That’s exactly what happened during the Civil Rights Movement — people talked about issues many Americans weren’t comfortable with. If we didn’t have the freedom of speech, the United States might be in a different place today. “Civil rights advocates would have been labeled hate speakers for trying to upset the customs, habits and sometimes laws of the nation regarding segregation,” Policinski said. “You just have to imagine what our society would be like today had they been prevented from speaking even though at the time perhaps the majority of Americans didn’t want to hear what they had to say.” What About Pornography? In 2006, there were 4.2 million pornographic Web sites, 420 million pornographic Web pages and 68 million daily pornographic search engine requests, according to the Internet Filter Review. Because of this prevalence, Donna Rice Hughes, president of Enough Is Enough, doesn’t believe the status quo works, especially when a simple search for “water sports” can return sites with urination pornography. The Internet has thrown open the doors to pornography for adults and, even more unsettling, for children, she said. “The early pioneers in the Internet industry will tell you behind closed doors that one of the ways they [made] and still do make money is because of the access that people have to pornography,” Hughes explained. “But having an entire generation of youth fed a steady diet of very hard-core material is not worth that price.” According to Hughes, there are three types of pornography: child pornography, obscenity and indecency. In the U.S., it is a federal crime to make, produce, distribute or possess child pornography. Obscenity — also not protected under the First Amendment — refers to hard-core material or deviant forms of pornography such as bestiality, incest and rape. “[Still], it’s everywhere on the Internet because the federal obscenity statutes are not being aggressively enforced,” she said. The third kind of pornography is indecency, which is “programming [that] contains patently offensive sexual or excretory material that does not rise to the level of obscenity,” according to the FCC. Indecency is constitutionally protected for consenting adults, but not for minor children. The Child Online Protection Act (COPA), which was enjoined in 1998 as soon as it was signed into law, would have protected minors from these forms of pornography on the Internet. 
After being jostled about in the courts for 10 years, the Supreme Court declined to hear the case again in January, effectively killing COPA, said Hughes, who was on the COPA Commission. “It never went into effect and it never will go into effect because it’s dead now,” she explained. “The net result is that all these years there has not been a cyber brown wrapper, if you will, to screen minor children from getting into any of these porn sites online.” Hughes would like to see the same standards of decency for broadcast applied to the Internet. “The Internet shouldn’t get a free pass,” she said. “Since it has become the M.O. of how we communicate, then shouldn’t we have some rules for the road? “If you could turn on the television and see people having sex, women having sex with dogs, people urinating in sexual ways, then that would be the same as the Internet. With television, if you want to get something that’s adult, you have to opt in to get it. When you turn on the Internet, you’ve got everything.” But Hughes doesn’t believe this will change, as evidenced by what happened to the CDA and COPA. “To go in and shift the paradigm to where everything’s locked down, and if you want free access to everything you’ve got to start opting out of the safe zone, that’s a huge jump from where we are. I don’t think it’s going to happen,” she said. Enough Is Enough has developed a three-pronged solution to provide a safe environment for children. First, end users — especially those responsible for children — need to be educated on the dangers that exist on the Internet and implement safety measures to protect kids. Second, the technology industry must implement IT solutions and develop family-friendly policies. Third, there must be aggressive enforcement of existing laws and enactment of new laws to stop “the sexual exploitation and victimization of children using the Internet,” according to the organization’s Web site. “You can’t expect parents and the public to enforce the law, and you can’t expect government to parent kids,” Hughes said. “Everybody’s got a unique role, and if everyone’s doing their part, then you’ve got a very strong chance that kids are going to be much safer online. But we’ve still got a long way to go in each of those areas.” How Do Other Countries Tackle This Issue? Not every country is as tolerant of free speech as the U.S. According to the 2007 OpenNet Initiative study, 25 out of 41 countries surveyed engaged in Internet censorship, and that number is on the rise, Palfrey said. The most basic form of censorship can be found in Saudi Arabia, where there is a single gateway that everyone has to go through. “Whenever somebody tries to access the Internet from Saudi Arabia, it goes through this proxy system,” Palfrey said. “The request from the user is judged against a blacklist, which says, ‘Is this site acceptable material or not?’ If it’s on the blacklist, they do not return the page.” In direct contrast to that is China’s filtering system, which is a complicated multi-level strategy with a gateway at every possible level, and many people share the responsibility of filtering the Internet. “They [effectively] erected the Great Firewall of China around the edge of the country, [which] turned out to be porous,” Palfrey said. 
“So at the Internet service provider level, there are blocks for material that [is] deemed to be harmful; there are blocks on search engines, including Google and others based in the United States; there are blocks through blog servers; there are blocks at the university level; there are blocks at the cybercafe level; and so forth.” China is one of the most repressive filtering regimes. Anything that is a threat to its form of government or way of life is censored, Meeks said. “Let’s look, for example, at the big earthquake that happened in China,” he explained. “People got all upset because there [were] a lot of schools that crumbled and children died. People got on the Internet criticizing the way the government handled that construction. The Chinese government stepped in and started to shut down access to information about construction and arrested people who were speaking out against the government.” But Meeks doesn’t believe the Internet can be censored effectively even in China. “[China has] their hand on the information pipe, and they squeeze it pretty tight,” he said. “There are ways to get around that, and people are finding ways to circumvent the Chinese censors all the time. But it’s kind of like escalating warfare. The Chinese clamp down harder, and then new tools spring up and find better and faster ways of circumventing that censorship. The Chinese government [then] retaliates by finding out what those are and clamping down even harder — so it goes back and forth.” One might argue that any type of censorship runs contrary to the nature of the Internet, which is inherently about the free flow of information. “One of the great advantages of being able to use the Internet is that people feel empowered to say things that they may not say face-to-face,” Meeks said. “If you are being censored, it chills the way you speak; it chills the way you use the Internet. It drops to the lowest common denominator, so things become no more useful than the dialogue taking place in an elementary school classroom.” – Lindsay Edmonds Wickman, editor (at) certmag (dot) com
<urn:uuid:8a7c0467-da0f-4586-8ca1-7d988e7651ed>
CC-MAIN-2017-04
http://certmag.com/free-speech-online-where-is-the-line-drawn/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00469-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952711
2,813
2.875
3
eDirectory is multi-threaded for performance reasons. In multi-threading, when the system is busy, more threads are created to handle the load, and when the load drops, surplus threads are terminated to avoid extra overhead. It is inefficient and costly to frequently create and destroy threads. Instead of spawning new threads and destroying them for every task, a number of threads are started and placed in a pool. The system allocates threads from the thread pool to tasks as needed. Tasks are held in two types of queues: tasks that need immediate scheduling are held in the Ready queue, and tasks that need scheduling at a later time are held in the Waiting queue. Not every module uses the thread pool, so the actual number of threads for the process is more than the number that exists in the thread pool. For example, FLAIM manages its background threads separately. Running the ndstrace -c threads command returns the following thread pool statistics: the total number of threads that are spawned, terminated, and idle; the current and peak number of worker threads; the number of tasks and peak number of tasks in the Ready queue; the minimum, maximum and average number of microseconds spent in the Ready queue; and the current and maximum number of tasks in the Waiting queue. The thread pool is controlled by the following configuration parameters: n4u.server.max-threads: Maximum number of threads that can be available in the pool. n4u.server.idle-threads: Maximum number of idle threads that can be available in the pool. n4u.server.start-threads: Number of threads started. Run the ndsconfig get and ndsconfig set commands to get and set the thread pool size.
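The ready/waiting split described above is a common worker-pool pattern. The following short Python sketch is purely illustrative: it is not eDirectory code, and the class, method and parameter names are invented for the example. It simply shows immediate tasks being pulled from a Ready queue by a fixed set of worker threads while deferred tasks sit in a Waiting queue until their time arrives.

import heapq
import itertools
import queue
import threading
import time

class ThreadPool:
    """Toy worker pool with a Ready queue and a Waiting (deferred) queue."""
    def __init__(self, start_threads=4):
        self.ready = queue.Queue()      # tasks to run as soon as a worker is free
        self.waiting = []               # heap of (run_at, seq, task) for deferred tasks
        self._seq = itertools.count()
        self._lock = threading.Lock()
        for _ in range(start_threads):
            threading.Thread(target=self._worker, daemon=True).start()
        threading.Thread(target=self._promote, daemon=True).start()

    def submit(self, task, delay=0.0):
        # Tasks with a delay go to the Waiting queue; the rest go straight to Ready.
        if delay > 0:
            with self._lock:
                heapq.heappush(self.waiting, (time.time() + delay, next(self._seq), task))
        else:
            self.ready.put(task)

    def _promote(self):
        # Periodically move due tasks from the Waiting queue to the Ready queue.
        while True:
            with self._lock:
                while self.waiting and self.waiting[0][0] <= time.time():
                    _, _, task = heapq.heappop(self.waiting)
                    self.ready.put(task)
            time.sleep(0.05)

    def _worker(self):
        while True:
            task = self.ready.get()     # blocks until work is available
            task()

pool = ThreadPool(start_threads=4)
pool.submit(lambda: print("immediate task"))
pool.submit(lambda: print("deferred task"), delay=0.2)
time.sleep(0.5)                         # give the daemon threads time to run

Real eDirectory tuning, of course, happens through the n4u.server.* parameters above rather than in application code.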
<urn:uuid:96cbce4c-a887-49ed-91e6-a9dd0de082f9>
CC-MAIN-2017-04
https://www.netiq.com/documentation/edir88/edir88tuning/data/bqmh9c0.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00288-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93854
375
2.765625
3
Turney then makes two simple observations. First, every human pupil is dark brown, regardless of the color of the iris, which encloses the pupil and determines the color of the eye. Second, blue is the lightest color of human iris. The consequence of these two observations is that the size of the pupil is easiest to determine in blue eyes. If you face people with different eye colors and must determine whether each person likes or is interested in you, with all else equal, it is easiest to read the blue-eyed person's level of interest or attraction. Turney's argument, which I believe might be true, is that blue-eyed people are considered attractive as potential mates because it is easiest to determine whether they are interested in us or not. It is easier to "read the minds" of people with blue eyes than of those with eyes of any other color, at least when it comes to interest or attraction.
<urn:uuid:9cb1f02c-a8d3-465b-b65e-e7d09193b52d>
CC-MAIN-2017-04
https://danielmiessler.com/blog/a-theory-for-why-blue-eyes-are-attractive-psychology-today/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00196-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945169
195
2.921875
3
Flash memory manufacturers are constantly improving on solid state technology. The per-gigabyte cost of solid state technology falls year by year, and more and more computer users are trading in their hard disk drives for solid state drives. As the world slowly adopts a solid state standard, solid state drives are becoming more frequent visitors to our data recovery lab. Gillware works hard to stay on top of advancements in flash memory. If your SSD has failed, our SSD data recovery technicians can help you.
What Is an SSD?
(Image: An SSD with the same dimensions as a laptop-sized 2.5" hard disk drive.)
Solid state drives are a bit like oversized USB thumb drives. If you crack open the case of your thumb drive, you'll see a green printed circuit board with two or more black chips on it. The larger black chip is the NAND flash memory chip, where all the data on your thumb drive lives. Some high-capacity flash drives may have more than one NAND chip. NAND flash memory is "non-volatile", meaning the information it stores stays there after the device is powered off. (An example of volatile memory is your computer's RAM, which holds data to improve efficiency and speed while you use your computer, and loses that data when the power is turned off.) The basic design principle behind solid state drives is the same. However, SSDs are much more complex and have many more features than your typical USB drive. A solid state drive has many more NAND chips, and has a SATA or PCIe connection instead of a USB plug. The multiple NAND chips inside your SSD actually work similarly to the multiple drives in a RAID array. Data is constantly being written to and pulled from multiple chips, instead of just one at a time. This is one of the reasons why SSDs are so speedy and efficient compared to other flash memory devices. Your thumb drive, for example, is not very fast at all compared to an SSD, even though it uses the same type of NAND chip. In addition, some (but not all) SSDs have SDRAM chips. These chips work just like the RAM in your computer. SDRAM chips are faster than NAND chips, but they are also volatile memory chips. They only store data when the drive is powered on, and flush it all out when the drive is turned off. SSDs can load programs or files into the SDRAM chip for faster performance and then dump them back into the NAND chips when the drive is properly shut down. The next time you power on the SSD, the SDRAM chip is completely blank and ready to be used again. If you'd like more information about what's inside your SSD, one of our video blogs goes into greater detail here. Solid state drives can come in a wide variety of shapes and sizes. Traditional hard disk drives are constrained by the sizes of their platters and other components. SSDs have far less bulky components, and those components can be arranged in many different ways. In many modern Apple notebooks, the internal solid state drive is a long, thin, rectangular shape. An SSD can also be short and stout. Many SSDs are made to conform to the 2.5" form factor of laptop hard drives as well. This makes them much easier to integrate into laptops that came with traditional spinning-platter hard drives inside them.
(Image: Two SSDs with very different form factors that were brought to us for our solid state drive data recovery services.)
Solid State Drive Failure Solid state drives have no moving parts. They are not limited by the top speed of a spinning platter or the movement of a read/write head stack assembly.
This is one reason for their lightning-fast performance. An added benefit is that solid state drives are very durable. But this doesn’t make them indestructible or failure-proof. They are merely less prone to certain kinds of failure than hard disk drives. (For example, you will never see a solid state drive with failed heads or spindle motors, since they have none.) SSDs, like hard disk drives, have firmware of their own. Since the underlying technology is so different, SSD firmware resembles hard drive firmware about as much as a calzone resembles a meat pie. But like hard drive firmware, SSD firmware controls, supplies, and regulates access to the data on the drive. And just like hard drive firmware, SSD firmware can fail. For example, among some models of Intel SSDs there is a notorious firmware bug known as the “8MB bug”. This firmware bug makes the SSD show up as uninitialized (unformatted, or “raw”), with a total capacity of eight megabytes. NAND chips, like hard drive platters, suffer from old age. There is a set amount of times you can read and write data to and from a single cell in a NAND chip before that cell becomes unusable. This limit is typically hundreds of thousands of read/write actions. The SSD’s firmware and controller help regulate cell use in a process known as “wear leveling”. SSDs try to distribute data equally across the NAND chips so that each cell lasts about equally as long as its neighbor. The wear leveling process can fail due to a firmware problem, or some cells could just die prematurely. SSD Data Recovery Process In the case of logical damage (accidental reformat, file deletion, corruption, etc), SSD data recovery is just like hard drive data recovery. While the underlying technology is drastically different, SSDs have sectors, clusters, blocks, partitions, and filesystems, just as hard drives do. But when there is something wrong with the SSD’s physical components, the SSD data recovery process becomes much more complex. Fixing an SSD with a failed PCB is nothing like fixing a hard drive with a failed PCB. In one way, it is easier. The platters inside your hard drive can only be read inside your hard drive. But the NAND chips on your SSD can be removed from the PCB and read in a chip reader. The problem is that the raw data inside the SSD’s NAND chips is jumbled and incoherent. Mixed in with the user data is all the data the SSD uses for its internal operations. This data is useless to the user, but vital to the device. During an SSD’s normal operations, data passes through the controller on its way to and from the NAND chips. The controller assembles the data from the chips to make it usable. When the PCB has failed, SSD data recovery involves removing each NAND chip and reading each one’s contents individually. Next, custom software has to be written to do the controller’s job. The raw data from each NAND chip is strung together, the user-irrelevant information is stripped out, and the puzzle pieces are reassembled in the proper order. Challenges to SSD Data Recovery Many modern SSDs are self-encrypting. For these models of SSD, while the user may choose not to enable password-protection on their SSD, all of the data is still stored in an encrypted format by default. The encryption key is stored in the same controller that manages the flow of data to and from the NAND chips. If the controller dies, the encryption key is lost. This can make data recovery from SSDs that have failed impossible. 
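Setting the encryption problem aside for a moment, the chip-reassembly step described above can be illustrated with a deliberately simplified sketch. This is not Gillware's software: it assumes data was striped page-by-page across the chips in simple round-robin order, ignores ECC, XOR scrambling and wear-leveling maps, and uses a made-up page size.

PAGE_SIZE = 4096  # assumed page size in bytes

def reassemble(chip_dumps):
    """Interleave raw per-chip dumps back into one image, page by page."""
    pages = min(len(d) for d in chip_dumps) // PAGE_SIZE
    image = bytearray()
    for page in range(pages):
        for dump in chip_dumps:                      # round-robin across chips
            start = page * PAGE_SIZE
            image += dump[start:start + PAGE_SIZE]
    return bytes(image)

# Toy demonstration: two fake "chips" holding alternating pages 0,2 and 1,3.
chip0 = bytes([0]) * PAGE_SIZE + bytes([2]) * PAGE_SIZE
chip1 = bytes([1]) * PAGE_SIZE + bytes([3]) * PAGE_SIZE
image = reassemble([chip0, chip1])
assert image[0] == 0 and image[PAGE_SIZE] == 1 and image[2 * PAGE_SIZE] == 2

On a real drive the physical page order also reflects wear leveling, so the recovered pages still have to be remapped into logical order before a filesystem can be mounted.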
Self-encryption is an extremely high barrier to our SSD data recovery efforts. However, Gillware Data Recovery has partnered with data recovery experts, SSD manufacturers, and security organizations to make data recovery from self-encrypting SSDs more possible. We recovered data from our first self-encrypting SSD in 2012. Since then, the number of models of self-encrypted drives our solid state drive data recovery technicians have been able to salvage data from has been growing.
Why Choose Gillware for My SSD Data Recovery Needs? At Gillware, we work hard to stay on the cutting edge of solid state data storage technology. As a leader and pioneer in the field of SSD data recovery, we also provide financially risk-free data recovery services. We charge no upfront fees for data recovery, and if we cannot recover your critical data, you don't owe us a dime. Talk to one of our recovery client advisors to see if our SSD data recovery services are right for you.
Ready to Have Gillware Assist You with Your SSD Data Recovery Needs?
Best-in-class engineering and software development staff: Gillware employs a full-time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions.
Strategic partnerships with leading technology companies: Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices.
RAID Array / NAS / SAN data recovery: Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.
Virtual machine data recovery: Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.
SOC 2 Type II audited: Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure.
Facility and staff: Gillware's facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.
We are a GSA contract holder: We meet the criteria to be approved for use by government agencies. GSA Contract No.: GS-35F-0547W. Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.
No obligation, no up-front fees, free inbound shipping and no-cost evaluations: Gillware's data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered.
Our pricing is 40-50% less than our competition: By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low.
Instant online estimates: By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.
We only charge for successful data recovery efforts: We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.
Gillware is trusted, reviewed and certified: Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they're getting the best data recovery service possible. Gillware is a proud member of IDEMA and the Apple Consultants Network.
<urn:uuid:1a38e8de-e66f-44f2-9cba-fc7f3e4513a7>
CC-MAIN-2017-04
https://www.gillware.com/ssd-data-recovery/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00104-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936909
2,316
2.96875
3
The assembler directives do not emit machine language but, as the name implies, direct the assembler to perform certain operations only during the assembly process. Here are a number of directives that we shall discuss. CSECT Identifies the start or continuation of a control section. DSECT Identifies the start or continuation of a dummy control section, which is used to pass data to subroutines. EJECT Start a new page before continuing the assembler listing. END End of the assembler module or control section. EQU Equate a symbol to a name or number. LTORG Begin the literal pool. PRINT Sets some options for the assembly listing. SPACE Provides for line spacing in the assembler listing. START Define the start of the first control section in a program. TITLE Provide a title at the top of each page of assembler listing. USING Indicates the base registers to use in addressing. By definition, a control section (CSECT) is "a block of coding that can be relocated (independent of other coding) without altering the operating logic of the program."* Every program to be executed must have at least one control section. If the program has only one control section, as is usually the case, we may begin it with either a CSECT or START directive. According to Abel, a START directive "defines the start of the first control section in a program"**, though he occasionally contradicts himself. We shall later discuss reasons why a program might need more than one control section. In this case, it is probably best to use only the CSECT directive. * The definition is taken from page 109 of Programming Assembler Language, Peter Abel, 3rd Edition, ISBN 0-13-728924-3. The segment in quotes is taken directly from Abel, who also has it in quotes. The source is some IBM document. ** Abel, page 577. But see page 40. Abel has trouble giving a definition. A DSECT (Dummy Section) is used to describe a data area without actually reserving any storage for it. This is used to pass arguments from one program to another. Consider a main program and a subroutine. The main program will use the standard data definitions to lay out the data. The subroutine will use a DSECT, with the same structure, in order to reference the original data. The calling mechanism will pass the address of the original data. The subroutine will associate that with its DSECT and use the structure found in the DSECT to generate proper addresses for the arguments. We shall discuss Dummy Sections in more detail later. The END statement must be the last statement of an assembler control section. The form of the statement is quite simple: it is the word END, optionally followed by the label of the program's entry point (for example, END LAB1, as in the listing below). So, our first program had the following structure: it began with a START directive, was followed by some program statements, and finished with an END statement. Note that it easily could have been the following instead: a CSECT directive in place of the START, followed by the same program statements and the END statement. The EQU directive is used to equate a name with an expression, symbolic address, or number. Whenever this name is used as a symbol, it is replaced. We might do something such as the following, which makes the symbol R12 equal to 12; it is replaced by that value when the assembler is run. R12 EQU 12 There are also uses in which symbolic addresses are equated. Consider this example. PRINT DC CL133' ' P EQU PRINT Each symbol references the same address. One can also use the location counter, denoted by "*", to set the symbol equal to the current address. This example sets the symbol RETURN to the current address.
RETURN EQU * BRANCH TO HERE FOR NORMAL RETURN
The Location Counter As the assembler reads the text of a program, from top to bottom, it establishes the amount of memory required for each instruction or item of data. The Location Counter is used to establish the address for each item. Consider an instruction or data item that requires N bytes for storage. The action of the assembler can be thought of as follows: 1. The assembler produces the binary machine language equivalent of the data item or instruction. This bit of machine language is N bytes long. 2. The machine language fragment is stored at address LC (Location Counter). 3. The Location Counter is incremented by N. The new value is used to store the next data item or instruction. The location counter is denoted by the asterisk "*". One might have code such as: SAVE DS CL3 KEEP EQU *+5 Suppose the symbol SAVE is associated with location X'3012'. It reserves 3 bytes for storage, so the location counter is set to X'3015' after assembling the item. The symbol KEEP is now associated with X'3015' + X'5' = X'301A'. The Literal Pool contains a collection of anonymous constant definitions, which are generated by the assembler. The LTORG directive defines the start of a literal pool. Although some textbooks may imply that the LTORG directive is not necessary for use of literals, your instructor's experience is different. It appears that an explicit LTORG directive is required if the program uses literal arguments. The classic form of the statement is simply LTORG, where the "L" of "LTORG" is to be found in column 10 of the listing. This statement should be placed near the end of the listing, as in the next example taken from an actual program.
240                      * LITERAL POOL
000308            242    LTORG *
000308 00000001   243    =F'1'
000000            244    END   LAB1
Here, line 243 shows a literal that is inserted by the assembler. The PRINT directive controls several options that impact the appearance of the listing. Two common variants are:
PRINT ON,NOGEN,NODATA    WE USE THIS FOR NOW
PRINT ON,GEN,NODATA      USE THIS WHEN STUDYING MACROS
The first operand is the listing option. It has two values: ON or OFF. ON – Print the program listing from this point on. This is the normal setting. OFF – Do not print the listing. The second operand controls the listing of macros, which are single statements that expand into multiple statements. We shall investigate them later. The two options for this operand are NOGEN and GEN. GEN – Print all the statements that a macro generates. NOGEN – Suppress the generated code. This is the standard option. The third operand controls printing of the hexadecimal values of constants. DATA – Print the full hexadecimal value of the constants. NODATA – Print only the leftmost 16 hex digits of the constants. A typical use would be found in our first lab assignment.
BALR  R12,0    ESTABLISH
USING *,R12    ADDRESSABILITY
The structure of this pair of instructions is entirely logical, though it may appear quite strange. Note that the USING *,R12 is a directive, so that it does not generate binary machine language code. The BALR R12,0 is an incomplete subroutine call. It loads the address of the next instruction (the one following the USING, since that is not an instruction) into R12 in preparation for a Branch and Link that is never executed. The USING * part of the directive tells the assembler to use R12 as a base register and begin displacements for addressing from the next instruction. This mechanism, base register and offset, is used by IBM in order to save memory space. We shall study it later.
Directives Associated with the Listing Here is a list of some of the directives used to affect the appearance of the printed listing that usually was a result of the program execution process. In our class, this listing can be seen in the Output Queue, but is never actually printed on paper. As a result, these directives are mostly curiosities. EJECT This causes a page to be ejected before it is full. The assembler keeps a count of lines on a page and will automatically eject when a specified count (maybe 66) is reached. One can issue an early page break. SPACE This tells the assembler to place a number of blank lines between each line of the text in the listing. Values are 1, 2, or 3.
SPACE 1 – Causes normal spacing of the lines.
SPACE 2 – Double spacing; one blank line after each line of text.
SPACE 3 – Triple spacing; 2 blank lines after each line of text.
TITLE This allows any descriptive title to be placed at the top of each listing page. The title is placed between two single quotes. TITLE 'THIS IS A GOOD TITLE'
<urn:uuid:184e8971-6d9d-464f-b7e1-4c358c0a4888>
CC-MAIN-2017-04
http://edwardbosworth.com/MY3121_LectureSlides_HTML/AssemblerDirectives.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00040-ip-10-171-10-70.ec2.internal.warc.gz
en
0.871853
1,958
3.5625
4
source: http://www.securityfocus.com/bid/1884/info Samba is a set of programs that allow Windows® clients access to a Unix server's filespace and printers over NetBIOS. A directory traversal vulnerability exists in Microsoft's implementation of the SMB file and print sharing protocol for Windows 95 build 490.r6 and Windows for Workgroups. smbclient normally rejects '/../' sequences in user-supplied pathnames before submitting them to the server. This prevents an attacker from traversing the server's directory tree and accessing files which would normally be inaccessible. Because the check for '/../' is performed by smbclient, the server assumes the client is filtering invalid input. However, a modified client can be made to accept the restricted '/../' sequences, appending these characters to filenames and submitting them as a request to the server. Since the server leaves this input validation up to the client, once the server is provided with path information which contains '/../', it assumes it to be valid. As a result, a directory traversal becomes possible, granting an attacker access to normally-restricted portions of the host's filesystem. This can lead to the disclosure of security-related information, leaving the host open to further compromise. Connect to a resource using smbclient. Issue commands "cd ../" or "cd ..." Related Exploits: Trying to match OSVDBs (1): 19007 Other Possible E-DB Search Terms: Microsoft Windows 95/WfW, Microsoft Windows 95, Microsoft Windows
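The underlying mistake is trusting the client to sanitize paths; the usual remedy is to validate on the server. The short sketch below is a generic illustration in Python (it is not Samba or Windows code, and the share root is a made-up example) of rejecting any request that escapes a shared root directory after normalization.

# Generic server-side illustration: never trust client-supplied paths.
import os

SHARE_ROOT = "/srv/share"   # hypothetical exported directory

def resolve_safe(client_path):
    """Return an absolute path inside SHARE_ROOT, or raise ValueError."""
    # Normalize the path the client sent, collapsing any '..' components.
    candidate = os.path.normpath(os.path.join(SHARE_ROOT, client_path.lstrip("/\\")))
    # Reject anything that escapes the share root after normalization.
    if candidate != SHARE_ROOT and not candidate.startswith(SHARE_ROOT + os.sep):
        raise ValueError("path traversal attempt: %r" % client_path)
    return candidate

print(resolve_safe("docs/readme.txt"))   # allowed: /srv/share/docs/readme.txt
try:
    resolve_safe("../etc/passwd")        # rejected
except ValueError as err:
    print(err)

A stricter variant would resolve the candidate with os.path.realpath before the prefix check, which also catches symlink tricks inside the share.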
<urn:uuid:1e3d3af5-976a-4c52-9fb5-94a1e0830fed>
CC-MAIN-2017-04
https://www.exploit-db.com/exploits/20371/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00342-ip-10-171-10-70.ec2.internal.warc.gz
en
0.856673
320
2.609375
3
Hot Stuff! Identity Management One of the biggest acronyms in infosec is AAA (pronounced “triple-A”). Not a reference to a well-known motorists association, this refers to the “Big Three” of infosec: Administration, Authorization, and Authentication. Of these three, Authentication refers to establishing some proof of identity, and Authorization refers to using identity to control access to requested resources and services. Finally, administration comes into play in that somebody must establish and manage access control lists, define groups, manage permissions, and so forth, to establish controls to meet organizational security policy and access control requirements. In essence, identity management fits into both authentication and authorization, in that it’s designed to manage access to applications and networks based on user identity. What makes this topic and related tools and technologies interesting is that organizations have to build ways to provide secure access to information and applications for users who may be in-house or operating at a remote location outside the organizational security perimeter. For operations that must be available to employees, partners, and customers alike, boundaries get wide and sometimes fuzzy, in a hurry. According to Rutrell Yasin, author of an excellent story “What is Identity Management,” an identity management system incorporates some or all of the following building blocks: - Password reset: automates password functions while enforcing password policy, and enables users to reset their own passwords and unlock accounts without resorting to help staff. Special authentication questions and answers help identify users without requiring passwords per se. - Password synchronization: allows users to employ the same password to access multiple systems, services, and resources but does not require the kinds of changes to company IT infrastructures that single sign-on (SSO) systems often require. - Single sign-on (SSO): raises the bar from password synchronization to permit users to access all necessary systems and applications through a single login. Various products that include CA’s eTrust Single Sign-on, Passlogix Single Sign-On, and so forth manage user authentication and provide proper credentials to systems and applications as access to them is requested. - Access management software: controls access to systems and applications, typically using one or more methods to authenticate users including passwords, digital certificates, and hardware or software access tokens. Yasin’s story provides numerous examples of vendor products to meet such needs that support single, dual, and even multi-factor authentication technologies. The key idea is to provide centralized control over how identity information is obtained, stored, managed, and delivered to systems and applications on behalf of users, either inside or outside organizational security boundaries. As such, this is a fascinating technology for companies seeking to reach out beyond their physical locations to better engage employees, partners, contractors, customers, and other individuals who must access their systems and resources.
<urn:uuid:09ac8605-86c8-4b45-941e-864c8e382b31>
CC-MAIN-2017-04
http://certmag.com/hot-stuff-identity-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00250-ip-10-171-10-70.ec2.internal.warc.gz
en
0.905008
586
2.703125
3
Sommer R.,International Center for Agricultural Research in the Dry Areas | Glazirina M.,International Center for Agricultural Research in the Dry Areas | Yuldashev T.,International Center for Agricultural Research in the Dry Areas | Otarov A.,Kazakh Research Institute Of Soil Science And Agro Chemistry After U U Uspanov | And 16 more authors. Agriculture, Ecosystems and Environment | Year: 2013 Climate change (CC) may pose a challenge to agriculture and rural livelihoods in Central Asia, but in-depth studies are lacking. To address the issue, crop growth and yield of 14 wheat varieties grown on 18 sites in key agro-ecological zones of Kazakhstan, Kyrgyzstan, Uzbekistan and Tajikistan in response to CC were assessed. Three future periods affected by the two projections on CC (SRES A1B and A2) were considered and compared against historic (1961-1990) figures. The impact on wheat was simulated with the CropSyst model distinguishing three levels of agronomic management. Averaged across the two emission scenarios, three future periods and management scenarios, wheat yields increased by 12% in response to the projected CC on 14 of the 18 sites. However, wheat response to CC varied between sites, soils, varieties, agronomic management and futures, highlighting the need to consider all these factors in CC impact studies. The increase in temperature in response to CC was the most important factor that led to earlier and faster crop growth, and higher biomass accumulation and yield. The moderate projected increase in precipitation had only an insignificant positive effect on crop yields under rainfed conditions, because of the increasing evaporative demand of the crop under future higher temperatures. However, in combination with improved transpiration use efficiency in response to elevated atmospheric CO2 concentrations, irrigation water requirements of wheat did not increase. Simulations show that in areas under rainfed spring wheat in the north and for some irrigated winter wheat areas in the south of Central Asia, CC will involve hotter temperatures during flowering and thus an increased risk of flower sterility and reduction in grain yield. Shallow groundwater and saline soils already nowadays influence crop production in many irrigated areas of Central Asia, and could offset productivity gains in response to more beneficial winter and spring temperatures under CC. Adaptive changes in sowing dates, cultivar traits and inputs, on the other hand, might lead to further yield increases. © 2013 Elsevier B.V. Source
<urn:uuid:121528b2-41ce-48af-8694-b13fa8da30dc>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/central-asian-scientific-research-institute-for-irrigation-891457/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00554-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923132
510
2.671875
3
A compact, hot-pluggable fiber optic transceiver, the Small Form-factor Pluggable (SFP) transceiver is used in fiber optic communications for telecommunication and data communications applications. The SFP is the interface between a network device motherboard and a fiber optic or copper network cable. The SFP transceiver is able to support Gigabit Ethernet, Fibre Channel, SONET, and a number of other communications standards. In the near future, SFP will expand to SFP+. At that point, data rates of 10 Gbit/s become achievable, including 8 Gbit/s Fibre Channel. Compared to XENPAK or XFP modules, which contain all of their circuitry inside, an SFP+ module leaves some of its circuitry to be implemented on the host board. A wide range of optical transceivers is available, each with a different transmitter or receiver. This allows the user to configure and customize the transceiver to get the proper optical reach with either multi-mode or single-mode fiber. In addition, the optical SFP module comes in four categories: SX, which is 850nm; LX, which is 1310nm; ZX, which is 1550nm; and DWDM. There is also a copper-interface version, which permits a motherboard to communicate over a UTP (unshielded twisted-pair) cable network, as well as coarse wavelength division multiplexing (CWDM) modules and bidirectional single-fiber modules that use single-mode 1310/1490 nm wavelengths upstream and downstream. As currently available, the SFP transceiver is capable of transfer rates of up to 4.25 Gbit/s. XFP, a form factor which is otherwise very similar to the SFP type, increases this by nearly three times, to 10 Gbit/s. The SFP transceiver is specified and made compatible via a multi-source agreement (MSA) between manufacturers, so that users who run equipment from different manufacturers and providers can work effectively and smoothly without worrying about errors and inconveniences. The GBIC interface is the precursor to the SFP, hence the SFP is nicknamed the mini-GBIC. However, the SFP allows greater port density (number of transceivers per inch along the edge of a motherboard) than the GBIC. There is also the similar Small Form Factor (SFF) transceiver, which is about the same size as the SFP. Rather than plugging into an edge-card socket, it is attached directly to the motherboard as a pin through-hole device. Digital optical monitoring (DOM), also called digital diagnostics monitoring (DDM), functions are supported by the modern optical SFP transceiver according to the industry specification SFF-8472 from the SFP MSA. With this feature, the user has the ability to constantly monitor real-time parameters of the SFP, such as optical input/output power, supply voltage and laser bias current. That having been said, I am glad to say that the optical SFP transceiver is a very popular format, carried by a considerable number of fiber optic component suppliers. These companies carry SFP transceivers for all Cisco devices together with transceiver modules for many other manufacturers. So, if you need technology solutions for your networking applications, you now know what to look for.
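As a rough illustration of what DOM/DDM exposes, the sketch below decodes a few diagnostic fields from a raw byte buffer. The field offsets and scale factors are assumptions based on my reading of SFF-8472 (diagnostics page at I2C address 0xA2, temperature in 1/256 degC, voltage in 100 uV, bias in 2 uA, power in 0.1 uW); check the actual specification and your module's calibration mode before relying on them.

# Hedged example: decode SFF-8472-style diagnostic bytes already read from a module.
# Offsets and scale factors below are assumptions for illustration, not authoritative.
import struct

def decode_dom(a2_page: bytes) -> dict:
    """a2_page: at least 106 bytes read from the module's diagnostics page."""
    temp_raw, vcc_raw, bias_raw, tx_raw, rx_raw = struct.unpack_from(">hHHHH", a2_page, 96)
    return {
        "temperature_c": temp_raw / 256.0,   # signed, 1/256 degC units (assumed)
        "vcc_v": vcc_raw * 100e-6,           # 100 uV units (assumed)
        "tx_bias_ma": bias_raw * 2e-3,       # 2 uA units (assumed)
        "tx_power_mw": tx_raw * 1e-4,        # 0.1 uW units (assumed)
        "rx_power_mw": rx_raw * 1e-4,        # 0.1 uW units (assumed)
    }

# Fabricated buffer: 96 padding bytes, then five 16-bit fields.
fake = bytes(96) + struct.pack(">hHHHH", 0x1A80, 33000, 3000, 5000, 4000)
print(decode_dom(fake))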
<urn:uuid:6c6c39d4-5e94-4148-b163-eebc83fd52ad>
CC-MAIN-2017-04
http://www.fs.com/blog/digging-deeper-into-small-form-factor-pluggable-transceivers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00306-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92598
692
2.609375
3
It's not every day you can get deeply involved in a space program by sitting at your computer. Space scientists at The Royal Observatory, Greenwich have started up a crowd sourcing project that lets anyone with a PC spot and track solar storms. The project, known as Solar Stormwatch, uses real data from NASA's STEREO (Solar TErrestrial RElations Observatory) satellites, which are currently in orbit around the Sun and provide researchers with constant details about activities on the Sun's surface. NASA's STEREO satellites, one ahead of Earth in its orbit and the other trailing behind, were launched in 2006 and trace the flow of energy and matter from the Sun to Earth. Scientists hope that mass participation in the Stormwatch project will let them keep track of and untangle data that would otherwise take much longer to analyze. "The more people who can do this process, the more we will be able to know about one of these storms and which direction it's going in and exactly how fast. The collective measurements by lots of people is worth a lot more than a subjective opinion of one person," said Chris Davis, a scientist with Rutherford Appleton Laboratory. Solar Stormwatch volunteers can spot these storms and track their progress across space towards the Earth, the group stated. Such storms can be harmful to astronauts in orbit and have the potential to knock out communication satellites, disrupt mobile phone networks and damage power lines. With the public's help, Solar Stormwatch will let solar scientists better understand these potentially dangerous storms and help to forecast their arrival time at Earth, the group said. Solar Stormwatch is part of the Zooniverse network of projects. The first Zooniverse project, Galaxy Zoo, involved more than 250,000 people in classifying galaxies for a team of astronomers. NASA is set to ramp up its coverage of the Sun too. The space agency's recently launched Solar Dynamics Observatory will deliver high resolution images of the Sun ten times better than the average High-Definition television to help scientists understand more about the Sun and its disruptive influence on services like communications systems on Earth. Specifically, NASA says the SDO will beam back 150 million bits of data per second, 24 hours a day, seven days a week. That's almost 50 times more science data than any other mission in NASA history. The Defense Advanced Research Projects Agency (DARPA) recently got in on the space crowd sourcing act too. The military research outfit is looking to groups "numbering in the hundreds, to thousands, to possibly millions of people worldwide" to develop what it calls crowd sourcing algorithms to discover new applications for its diminutive Synchronized Position, Hold, Engage, and Reorient Experimental Satellites (SPHERES) that operate inside the International Space Station. And if your application is accepted, DARPA says it might even name one of the SPHERES satellites after your group.
<urn:uuid:2a90a386-74a6-47ce-88f5-1856bdeecf3f>
CC-MAIN-2017-04
http://www.networkworld.com/article/2229929/security/space-scientists-want-you-to-spot-and-track-solar-storms.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00572-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933056
613
3
3
Definition: When the execution time of a computation, m(n), is no more than a polynomial function of the problem size, n. More formally m(n) = O(n^k) where k is a constant. Specialization (... is a kind of me.) sublinear time algorithm. See also NP, exponential, logarithmic. Note: Broadly speaking, polynomial time algorithms are reasonable to compute. The time to run exponential algorithms grows too fast to expect to be able to compute exact solutions in all cases. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 13 August 2004. HTML page formatted Mon Feb 2 13:10:40 2015. Cite this as: Paul E. Black, "polynomial time", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 13 August 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/polynomialtm.html
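As a quick illustration of the note above: an algorithm whose running time is bounded by n^3 remains tractable at input sizes where a 2^n-time algorithm is hopeless. The snippet below (sample sizes chosen arbitrarily) prints both bounds side by side.

# Tiny comparison of polynomial (n**3) versus exponential (2**n) growth.
for n in (10, 50, 100):
    print(n, n**3, float(2**n))

At n = 100 the polynomial bound is only a million steps, while the exponential bound already exceeds 10^30.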
<urn:uuid:c4d6572e-a2bd-4bfa-b9d6-90ecf59f2d20>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/polynomialtm.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00480-ip-10-171-10-70.ec2.internal.warc.gz
en
0.832723
234
2.671875
3
CORE focuses on foodborne illness detection Monday, Aug 19th 2013 In cold storage, it's important to have temperature monitoring to keep food products in the appropriate conditions to stave off potential foodborne illnesses. Bacteria are always present with various items; however, maintaining a proper freezing environment and cooking the products thoroughly can help prevent the diseases. While there are several bacterial strains that routinely receive headlines, information about current outbreaks can be difficult to relay to the public due to the unknown nature of many cases. Many illnesses go unreported or the results don't reveal a single food product that was the cause, according to Food Safety News. Most outbreaks that lead to food being recalled have been tested and verified; however, the work can span multiple states and numerous weeks before the information is in hand. The FDA has the Coordinated Outbreak Response and Evaluation Network (CORE), which focuses solely on solving illnesses and is made up of Signals, Response and Post-Response teams to accurately and efficiently deal with the diseases. "It's seasonal for us, as outbreaks tend to be," said Ashley Grant, an epidemiologist for CORE's Signals team. "Right now, we are in the peak of our season, so we probably have about eight to 10 on our plate at any given time in summer and spring months. As we get into the fall and winter, we probably have about five a week." The situation becomes more complicated when there is lag time in receiving information and dwindling public health resources. A product's short shelf life can also make it increasingly difficult to get a sample to verify the cause, according to Food Safety News. While the process is challenging, CORE is dedicated to tracking down the source all the way to the specific harmful ingredient, if possible. Common outbreaks can be prevented There are numerous bacterial strains that can be harmless when the food is properly stored and cooked and environmental control systems are used. Some of the most common foodborne illnesses according to experts include: - E. coli While some of the items on the list can come from other sources as well as food, it's still important to understand how to prepare meals in order to ensure that people are not harmed from the potential diseases. One of the most common ways to contract the illness is cross-contamination. Using the same tools to prepare two different foods, specifically raw meat, without washing and sanitizing the utensils can transmit the bacteria from the meat to any other product, according to The University of Rhode Island. Mixing raw and cooked foods or cooking the food inadequately is also a major cause of foodborne illness. Ensuring that the products are made appropriately will reduce the bacteria to safe levels for consumption. Hygiene is also important to observe. Infection can be spread through touching food without washing hands and can cause foodborne illness in everyone who eats it. Foodborne illness can cause panic across communities when the source is still unknown. However, with CORE's concentrated effort, there is a possibility that more outbreaks will be revealed in a quicker manner than before. Even if outbreaks aren't apparent, properly preparing food will ensure that the number of cases is significantly reduced.
<urn:uuid:b1c4c17e-b69c-40a4-90f7-adc776c69e0f>
CC-MAIN-2017-04
http://www.itwatchdogs.com/environmental-monitoring-news/cold-storage/core-focuses-on-foodborne-illness-detection-493092
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00204-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964843
645
3.140625
3
In the wake of a sharp rise in cybersecurity concerns, Google is touting new security measures and encouraging user awareness about online threats and conscientious web browsing. Google’s Safe Browsing Technology has been curbing traffic to unsafe sites and educating its 1 billion users about potential security threats since 2006. Statistics on Safe Browsing Technology’s impact went unpublished for years, but Google is now incorporating this data into its bi-annual reports for public access. Computerworld’s recent article offers more insight into Google’s updated transparency report. The article addresses the predominant online culprits from which Google steers users away: nefarious sites hosting malware and phishing scams. Currently, Google’s Safe Browsing efforts detect and redirect users away from 10,000 newly discovered unsafe URLs each day. Google’s security efforts extend further: Chrome developers unveiled Enhanced Item Validation for the Chrome Web Store last Wednesday. The update serves as a security net that scans newly published items before they become widely available in Google’s online marketplace. Most safe items will be published within a matter of minutes, and even in extremely rare instances, all content should be available in the store within an hour. The Chrome Web Store is a popular vendor, but it isn’t the only place users can access unsafe or unwanted apps. Although Google Apps administrators can prevent the installation of applications from the Chrome Web Store, they cannot proactively monitor other sources of 3rd party apps, like the Apple (iOS) App Store, Android Marketplace or apps downloaded from public websites. CloudLock’s Apps Firewall bridges this gap in control, allowing administrators to scan for and detect all 3rd party applications that have been granted access to the domain via end-user installations. Applications that are deemed acceptable can be classified as “Trusted,” and those which are undesirable can be “Banned” and revoked if necessary.
<urn:uuid:ff142361-629a-4de7-8745-ae118583fd62>
CC-MAIN-2017-04
https://www.cloudlock.com/blog/googles-safe-browsing-technology-and-enhanced-item-validation-enforce-additional-safety-measures-for-users/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00416-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914339
404
2.515625
3
Getting rid of carbon dioxide (a contributor to the greenhouse effect) may be as simple as extracting it directly from the air. Funded by the U.S. Department of Energy, researchers from Georgia Tech are studying the economic feasibility of such a technique, the university reported Tuesday, July 24. Possible applications include supplying energy for fuel production or industrial applications by extracting carbon dioxide from algae, or providing a better method for oil recovery. The method could also be used to supplement the capture of emissions from power plant flues. Georgia Tech researchers found that a removal unit about the size of an ocean shipping container could extract about 1,000 tons of gas yearly with an operating cost of about $100 per ton. "Even if we removed CO2 [carbon dioxide] from all the flue gas, we'd still only get a portion of the carbon dioxide emitted each year," said David Sholl, a professor in Georgia Tech's School of Chemical & Biomolecular Engineering. "If we want to make deep cuts in emissions, we'll have to do more — and air capture is one option for doing that." For in-depth reporting on this new technology, read the full article at Georgia Tech’s website.
<urn:uuid:b6cc5acb-6a77-4c85-8dc3-2ae6255b1820>
CC-MAIN-2017-04
http://www.govtech.com/technology/Extracting-Carbon-Dioxide-Air.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00289-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950611
250
3.578125
4
Configuration Management Database (CMDB) Configuration Management Database (CMDB) is a centralized repository that stores relevant information about all the significant entities in your IT environment. These entities, termed Configuration Items (CIs), consist of the hardware, installed software applications, documents, business services and people that are part of your IT system. Unlike an asset database, which is simply a collection of CIs, the CMDB is designed to support a vast IT structure in which the interrelationships between CIs are maintained and supported. Each CI within the CMDB is grouped under a specific CI Type and is represented with Attributes and Relationships. Attributes are data elements that describe the characteristics of CIs under a CI Type. For instance, the attributes for the CI Type Server would be Model, Service Tag, Processor Name and so on. Relationships denote the link between two CIs and identify the dependency or connection between them. The CMDB in ServiceDesk Plus keeps track of all the pending requests, problems and changes raised for the CI Types - Assets, Business and IT Services. Any impact caused by the malfunctioning of these CIs on other CIs can be identified using the Relationship Map, and specific measures can be adopted to minimize the effect. The CMDB thus functions as an effective decision-making tool, playing a critical role in Impact Analysis and Root Cause Determination.
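To make the CI/Attribute/Relationship model concrete, here is a minimal Python sketch. It is not ServiceDesk Plus's actual data model or API; the class names, CI names and "depends on" relationship are illustrative assumptions, showing only how typed CIs plus a relationship map can support a simple impact analysis by walking dependencies.

```python
from collections import defaultdict

class CI:
    """A Configuration Item with a CI Type and free-form attributes."""
    def __init__(self, name, ci_type, **attributes):
        self.name = name
        self.ci_type = ci_type          # e.g. "Server", "Business Service"
        self.attributes = attributes    # e.g. model, service tag, processor

class CMDB:
    """Toy CMDB: stores CIs plus 'depends on' relationships between them."""
    def __init__(self):
        self.items = {}
        self.dependents = defaultdict(set)   # CI -> CIs that depend on it

    def add(self, ci):
        self.items[ci.name] = ci

    def relate(self, dependent, dependency):
        """Record that `dependent` relies on `dependency`."""
        self.dependents[dependency].add(dependent)

    def impact_of(self, name):
        """Walk the relationship map to find every CI affected by a failure."""
        affected, stack = set(), [name]
        while stack:
            current = stack.pop()
            for dep in self.dependents[current]:
                if dep not in affected:
                    affected.add(dep)
                    stack.append(dep)
        return affected

cmdb = CMDB()
cmdb.add(CI("db-server-01", "Server", model="R740", processor="Xeon"))
cmdb.add(CI("crm-app", "Software Application"))
cmdb.add(CI("sales-portal", "Business Service"))
cmdb.relate("crm-app", "db-server-01")       # the CRM app runs on the server
cmdb.relate("sales-portal", "crm-app")       # the portal depends on the CRM app

print(cmdb.impact_of("db-server-01"))        # {'crm-app', 'sales-portal'}
```

In a real CMDB the same traversal idea underpins impact analysis: a failing server is flagged, and every service reachable through the relationship map is reported as potentially affected.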
<urn:uuid:eb5e3410-dc58-4981-b3c0-ef3e5717cca5>
CC-MAIN-2017-04
https://www.manageengine.com/products/service-desk/help/adminguide/cmdb/configuration_management_database.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00105-ip-10-171-10-70.ec2.internal.warc.gz
en
0.909024
279
2.640625
3
Grid Computing: The Basics Grid computing—multiple heterogeneous systems seamlessly integrated as a single system—is now a business reality. Grid computing enables the virtualization of distributed computing and data resources such as processing, network bandwidth and storage capacity to create a single system image, granting users and applications seamless access to vast IT capabilities. Just as an Internet user views a unified instance of content via the Web, a grid user essentially sees a single, large virtual computer. At its core, grid computing is based on an open set of standards and protocols—e.g., Open Grid Services Architecture (OGSA)—that enable communication across heterogeneous, geographically dispersed environments. With grid computing, organizations can optimize computing and data resources, pool them for large-capacity workloads, share them across networks and enable collaboration. Learn more about the benefits of grid computing at IBM's Grid computing center.
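As a rough analogy for pooling resources behind a single interface, the following Python sketch uses a local process pool to stand in for grid nodes. The scheduler, task and worker count are assumptions made purely for illustration; this is not OGSA or any grid middleware, it only shows the "single virtual computer" idea of submitting work without caring where it runs.

```python
from concurrent.futures import ProcessPoolExecutor

def compute_chunk(chunk_id):
    """Stand-in for a compute-heavy task a grid would farm out to a node."""
    total = sum(i * i for i in range(1_000_000))
    return chunk_id, total

if __name__ == "__main__":
    # The caller sees one pooled resource; which worker (or, in a real grid,
    # which machine) executes each chunk is hidden behind the interface.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(compute_chunk, range(8)))
    print(f"completed {len(results)} chunks")
```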
<urn:uuid:e3adf1c5-a434-4a33-8042-35cf75209f4b>
CC-MAIN-2017-04
https://esj.com/articles/2003/12/01/grid-computing-the-basics.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00499-ip-10-171-10-70.ec2.internal.warc.gz
en
0.886902
178
3.609375
4
Creating An Acceptable Use Policy An Acceptable Use Policy (AUP) is a written document agreed to by everyone sharing a computer network. It defines the intended uses of the network, including unacceptable uses and the consequences for violating the agreement. Mike Theriault, President and CEO of B2B Computer Products LLC, lays down some basic definitions and steps to get you started: Although it may be necessary to include some legal terminology in the document, the best AUPs are written in clear terms that everyone can understand. Before you start drafting the AUP, give notice to everyone affected that policy creation or revision is underway and establish a contact point for collecting feedback. Then decide on the purpose of your AUP. Will it only set general guidelines and expectations? Or will it be a legally enforceable document? This will have a strong bearing on the tone and wording. Begin the document with your company’s code of conduct, if you have one. Otherwise, develop a paragraph that sums up your company’s operational ethics. Most companies will add to their AUP as issues arise, but the following key areas are good places to begin.
<urn:uuid:45022bd1-6f8c-4fec-b32a-1d559d54a41b>
CC-MAIN-2017-04
http://www.baselinemag.com/c/a/IT-Management/Creating-An-Acceptable-Use-Policy-644003
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00343-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928557
234
2.59375
3
Digital Citizenship: A Holistic Primer defines digital citizenship, describes its important role in education today and in the future, and offers specific examples of schools and colleges that are teaching it well. The paper breaks down digital citizenship into three core themes – respect, educate and protect – and provides six elements for each, such as digital law, digital literacy and digital security. It discusses how schools can approach the issue of Internet access, such as monitoring, filtering or blocking Internet use and handling BYOD initiatives. It offers guidance on what schools should consider when creating a digital citizenship plan and lists classroom examples from educators in elementary, middle and high school and college. Digital Citizenship: A Holistic Primer is a collaborative effort between the Digital Citizenship Institute, which hosts the Digital Citizenship Summit, and Impero Software.
<urn:uuid:854abb6a-6a04-48a3-a48f-efabb73115ac>
CC-MAIN-2017-04
https://www.imperosoftware.com/whitepapers/digital-citizenship-a-holistic-primer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00159-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929229
173
3.53125
4
What is a Hard Drive Almost all desktop computers have a hard drive inside them, but do you really know what they are? Many people, when they hear the words hard drive, think that it refers to the computer as a whole. In reality, though, the hard drive is just one of many different pieces that comprise a computer. The hard drive is one of the most important parts of your computer because it is used as long-term storage space for your data. That means that regardless of whether the computer is on, or whether you lose power, the data stays stored on this drive, keeping it safe. On the other hand, it shows how important backing up your data is, because if the hard drive malfunctions there is a good chance you will lose it all. A hard drive is an integral piece of equipment for your computer, as your operating system and all your data are typically stored there. In most situations, if you do not have a working hard drive, or the hard drive malfunctions, you will not be able to boot your computer into the operating system and will get an error. If you opened your computer case and wanted to find your hard drive, it would look similar to the image below: Image of a Hard Drive How hard drives work If you were to open your hard drive, which would immediately void your warranty and potentially damage it, you would see something like the image below: Inside a Hard Drive A hard drive consists of the following: When the computer wants to retrieve data from the hard drive, the motor spins up the platters and the arm moves itself to the appropriate position above the platter where the data is stored. The heads on the arm detect the magnetic bits on the platters and convert them into data that can be used by the computer. Conversely, when data is sent to the drive, the heads send magnetic pulses at the platters, changing the magnetic properties of the platters and thus storing your information. It is important to note that since the data stored on your hard drive is magnetic, it is not a good idea to play with a magnet near your hard drive :) Hard Drive Interfaces A hard drive connects to your computer through a specific type of interface. The interface on your hard drive must match the corresponding interface on your motherboard. If you purchase a new hard drive that has an interface that your motherboard does not support, it will not work in your computer. Currently there are three interfaces that have become the standard for connecting your hard drive to your computer. Some information about each of these interfaces is below. When buying a hard drive When purchasing a hard drive there are some characteristics you want to keep in mind that will help you determine the right drive for your needs. These characteristics are: If you have any questions about this tutorial or about hard drives, feel free to post them in the computer help forums.
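Since the tutorial's central point is that the hard drive is your long-term storage, a tiny Python sketch can make that tangible by reporting how much of a drive is used and free. The drive path below is an assumption for the example; adjust it for your own system.

```python
import shutil

# Point this at any drive or mount point on your system (assumption:
# "C:\\" on Windows; use "/" on Linux or macOS instead).
drive = "C:\\"

usage = shutil.disk_usage(drive)
gib = 1024 ** 3  # bytes in a gibibyte

print(f"Total: {usage.total / gib:.1f} GiB")
print(f"Used:  {usage.used / gib:.1f} GiB")
print(f"Free:  {usage.free / gib:.1f} GiB")
```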
<urn:uuid:6901f193-8e7f-44e0-a2bc-2d30533d4dde>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/how-hard-drives-work/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00307-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945624
1,016
3.59375
4
OpenFlow 1.3 Updates SDN Protocol OpenFlow is a Layer 2 networking protocol sponsored by the Open Networking Foundation that allows for the separation of the control plane and the forwarding data plane in Ethernet architectures. Traditionally, these two planes are located in the same device − the control plane maintains network information and routing tables used to maintain connectivity, while the data plane provides the interface for incoming and outgoing packets. This requires all networking devices to have access to the same tables and related information, which is usually updated manually as changes to the network are implemented. Under OpenFlow, the data plane remains in the switch, but the control plane is placed on a separate controller, with OpenFlow enabling communication between the two. By separating control and data forwarding, network configuration settings and updates can take place in software, which opens up the possibility of embedding network information and requirements at the application level. This allows networks to be configured and reconfigured on the fly, with little or no direct involvement at the hardware level, essentially providing a virtual, abstracted networking environment. This is commonly known as Software Defined Networking (SDN). OpenFlow 1.3 is the latest update to the OpenFlow protocol. It describes the various port configurations, channel types and flow tables, as well as the relationships between these elements, to be used in OpenFlow-compatible switches. Every OpenFlow switch contains a number of flow and group tables used in the packet forwarding process, as well as a communications channel to an external controller. In this way, the controller is able to supply the switch with updates to flow tables and other pertinent information needed to maintain network pathways. OpenFlow switches support two kinds of pipeline processing: OpenFlow-only and OpenFlow-hybrid, used for single-protocol and mixed environments respectively. OF 1.3 also provides information on the port configurations needed to connect switches to each other. The protocol supports three types of ports: physical, logical and reserved. Physical ports correspond directly to a hardware interface, while logical ports exist on a higher, abstracted plane. Reserved ports handle generic forwarding instructions to non-OpenFlow systems. OF 1.3 includes a number of additions to the previous 1.2 spec. These include: - Support for IPv6, which allows controllers to implement new routing deployment configurations and Request for Comments (RFC) specifications - Tunneling and logical port abstractions that can be used in datacenter, Virtual Private Network (VPN) and other deployments - Provider Backbone Bridging (PBB), which provides a lightweight tunneling method for datacenter-to-datacenter connectivity - Enhanced per-flow metering and per-connection filtering techniques designed to improve data flow, bandwidth management and QoS The market for OpenFlow-compatible switches is small but growing, although no one has released an ONF 1.3 product yet. On the software side, there are a number of Reference Linux builds, which provide an easily configured, although somewhat slow, implementation, as well as the 4 Gb NetFPGA targeted mainly at research and educational applications.
Open vSwitch provides multilayer switching under the open source Apache 2 license and is considered the front-runner for enterprise environments due to its support for leading virtualization platforms like XenServer, KVM and VirtualBox. The OpenWRT system likewise provides a means to link OpenFlow to wireless routers and access points, also a key benefit to the enterprise. In hardware, options include the HP ProCurve 5400zl, featuring up to 48 1 Gb ports under ONF 0.8.9. HP recently announced that it will support OpenFlow on more than a dozen switches in the 3500, 4500 and 8200 families. The NEC PF5240 supports ONF 1.0 and also provides 48 1 Gbps ports, as well as a pair of 10 GbEs. Stanford University has also made available several versions of its Pronto switch, providing one OpenFlow instance per switch under the school's own reference design. And IBM recently released the G824 switch, which provides 48 10 GbE ports and four 40 GbE ports, and is capable of operating in traditional L2/L3 or OpenFlow modes. The Open Networking Foundation expects ONF 1.3 to be the last major revision to the protocol for the next year, at least. The organization says that, feature-wise, the system is fairly up to date, so the focus will shift away from development and more toward implementation in 2013. For the enterprise, OpenFlow not only ushers in a new era of dynamic network configuration, but introduces a level of network customization that should allow organizations to optimize both hardware and software to their specific needs. With the toolsets available in ONF 1.3, generic physical and virtual switch environments can easily be configured in-house for a wide array of network and data requirements. The full spec can be viewed here.
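To illustrate the match/action model described above (a controller installing flow entries that the switch's data plane then applies to packets), here is a minimal, library-free Python sketch. The field names, priorities and table-miss behaviour are simplified assumptions for illustration only; this is not the OpenFlow 1.3 wire format or any particular controller framework's API.

```python
from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    match: dict          # e.g. {"in_port": 1, "eth_type": 0x0800}
    actions: list        # e.g. ["output:2"]
    priority: int = 0

@dataclass
class Switch:
    """Data plane only: it simply applies whatever flow entries it holds."""
    table: list = field(default_factory=list)

    def install(self, entry):
        # Called by the controller over the (simulated) control channel.
        self.table.append(entry)
        self.table.sort(key=lambda e: -e.priority)

    def forward(self, packet):
        for entry in self.table:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["send_to_controller"]   # simplified table-miss behaviour

# "Control plane": a controller decides the rules and pushes them down.
sw = Switch()
sw.install(FlowEntry({"eth_type": 0x0800, "ip_dst": "10.0.0.2"}, ["output:2"], 10))
sw.install(FlowEntry({"eth_type": 0x0806}, ["flood"], 5))

print(sw.forward({"eth_type": 0x0800, "ip_dst": "10.0.0.2"}))  # ['output:2']
print(sw.forward({"eth_type": 0x86DD}))                        # ['send_to_controller']
```

The point of the sketch is the separation the article describes: the switch never decides policy, it only matches packets against entries that the controller chose to install, and anything it cannot match is punted back to the controller.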
<urn:uuid:98bcdbbe-f6c2-408c-b892-c638890b6864>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/datacenter/openflow-1.3-updates-sdn-protocol.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00215-ip-10-171-10-70.ec2.internal.warc.gz
en
0.910356
1,015
2.609375
3
The OADM, short for optical add-drop multiplexer, is a gateway into and out of a single-mode fiber. In practice, most signals pass through the device, but some are "dropped" by splitting them from the line. Signals originating at that point can be "added" into the line and directed to another destination. An OADM may be considered a specific type of optical cross-connect, widely used in wavelength division multiplexing systems for multiplexing and routing fiber optic signals. OADMs selectively add and drop individual wavelength channels, or sets of them, from a dense wavelength division multiplexing (DWDM) multi-channel stream. They are used to cost-effectively access part of the bandwidth in the optical domain as it passes through the in-line amplifiers, with the minimum amount of electronics. OADMs come in passive and dynamic forms. In a passive OADM, the add and drop wavelengths are fixed beforehand, while a dynamic OADM can be set to any wavelength after installation. Passive OADMs use WDM filters, fiber gratings and planar waveguides in networks with WDM systems. The dynamic form, called a Dynamic OADM, can select any wavelength by provisioning on demand, without changing its physical configuration. It is also less expensive and more flexible than a passive OADM. Dynamic OADMs are separated into two generations. A typical OADM consists of three stages: an optical demultiplexer, an optical multiplexer, and between them a method of reconfiguring the paths between the optical demultiplexer, the optical multiplexer and a set of ports for adding and dropping signals. The optical demultiplexer separates the wavelengths in an input fiber onto ports. The reconfiguration can be achieved by a cross-connect patch panel or by optical switches which direct the wavelengths to the optical multiplexer or to drop ports. The optical multiplexer multiplexes the wavelength channels that are to continue on from the demultiplexer ports with those from the add ports, onto a single output fiber. Physically, there are several ways to realize an OADM. There are a variety of demultiplexer and multiplexer technologies including thin film filters, fiber Bragg gratings with optical circulators, free space grating devices and integrated planar arrayed waveguide gratings. The switching or reconfiguration functions range from the manual fiber patch panel to a variety of switching technologies including microelectromechanical systems (MEMS), liquid crystal and thermo-optic switches in planar waveguide circuits. CWDM and DWDM OADMs provide data access for intermediate network devices along a shared optical media network path. Regardless of the network topology, OADM access points allow design flexibility to communicate with locations along the fiber path. A CWDM OADM provides the ability to add or drop a single wavelength or multiple wavelengths from a fully multiplexed optical signal. This permits intermediate locations between remote sites to access the common point-to-point fiber link connecting them. Wavelengths that are not dropped pass through the OADM and continue on toward the remote site. Additional selected wavelengths can be added or dropped by successive OADMs as needed.
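As a purely illustrative model of the add/drop behaviour described above, the following Python sketch treats a WDM line as a mapping of wavelength channels to signals and shows an intermediate node dropping some channels, adding others, and passing the rest through. The channel names and node logic are assumptions for the example, not any vendor's implementation.

```python
def oadm_node(line_channels, drop, add):
    """Return (locally dropped signals, channels continuing down the line)."""
    dropped = {ch: sig for ch, sig in line_channels.items() if ch in drop}
    passthrough = {ch: sig for ch, sig in line_channels.items() if ch not in drop}
    passthrough.update(add)          # locally originated signals join the line
    return dropped, passthrough

# Fully multiplexed signal arriving on the fiber (channel -> payload).
line = {"1530nm": "traffic A", "1550nm": "traffic B", "1570nm": "traffic C"}

# This intermediate site drops 1550nm for local use and adds 1590nm.
dropped, line = oadm_node(line, drop={"1550nm"}, add={"1590nm": "traffic D"})

print("dropped locally:", dropped)   # {'1550nm': 'traffic B'}
print("continuing on:  ", line)      # 1530nm, 1570nm and the new 1590nm
```

Chaining several calls to `oadm_node` mimics the article's point about successive OADMs along a shared fiber path, each adding or dropping only the wavelengths it needs while the rest pass through untouched.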
<urn:uuid:8654f2b9-98c7-4361-9d5a-f87627b84fa3>
CC-MAIN-2017-04
http://www.fs.com/blog/optical-add-drop-multiplexer.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00517-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912502
690
3.25
3
“Big data” is quickly making its way into conversations in government. However, it is difficult for government officials to identify what big data means for their own organizations. What are the challenges? How can they take on something new that does not necessarily connect to their core mission? And not least, how should they tackle the issue to respond to requests from the public? The big data discussion hits government in two ways: First, big data is created by citizens in their daily online interactions using social media, either directly with government or talking amongst themselves about issues related to government. As the recently released first guidance for social media metrics for federal agencies shows, government is just now getting into the groove of measuring, interpreting and acting on insights it can potentially gain from its interactions with citizens. The second trend centers around open government and the launch of the federal data-sharing site data.gov, a public website that hosts hundreds of data sets produced by federal agencies. Originally, the big data discussion started outside of government, but has direct implications for government as more and more agencies, politicians and citizens are using social media to interact with each other. Social networking platforms, such as Facebook or Twitter, allow citizens to directly connect to government agencies and share their immediate sentiments via comments in their own newsfeed. In doing so, they create hundreds of new data points. Conversations between government and citizens, and also among citizens, might not even directly involve government, but could be related to ongoing hot-button issues, upcoming policy changes or cuts in government programs. Keeping track of potentially thousands of externally created data points published by citizens on a daily basis has become an unmanageable problem that is slowly being addressed in the public sector. Some agencies have shut down the option to leave comments on their Facebook pages, thus reducing the cost to respond and track that data. Others actively pull in citizen input or have moved on to other platforms that focus the conversation on a specific problem and streamline the solicitation of targeted responses and input from the public (see, for example, Challenge.gov). The second trend that government agencies are facing is the mandate of the Open Government Initiative to release government data sets in machine-readable format for public consumption. The flagship initiative data.gov has paved the way for state and local governments to respond in a similar fashion. Most recently, New York State has released its own data portal, a website that hosts about 6,500 datasets from state, city and local government sources in the state. The challenge for public managers is manifold: they have to identify appropriate data sets, clean them, potentially merge them from different databases, and make sure that they do not contain any information that cannot legally be released to the public. To meet these requirements, agencies need additional resources and staff with appropriate skill sets. Beyond the internal organizational challenges, agencies also need to understand how they can open themselves up to third parties who are reusing the data. Mark Headd, the newly appointed first chief data officer of Philadelphia, recently spoke to my social media class at the Maxwell School and shared his insights into the world of big data in government. Mr.
Headd was appointed through an executive order of Philadelphia Mayor Michael Nutter. He reports directly to the city’s chief information officer and mayor, who made it a political priority to understand and organically implement elements of the open government movement -- an advantage other cities might not have. Headd describes himself as a data evangelist and an embedded technologist whose task is to discover government data, think about ways to make it available to the public and find a match between the data and external stakeholders who can potentially use the data to create public value. Internally, he is focused on cultural change more than on data analysis issues or technological problems: He aims to convince managers of the potential value their data can have for the public, and to inform them about citizens’ changing expectations. Mr. Headd then facilitates connections between data sources and potential data users outside of government. Creating a Data Ecosystem As one of the first Code for America cities in the United States, Philadelphia has a local tech community of civic hackers motivated to reuse public information and create valuable applications. As opposed to data.gov, where data sets are mostly available for so-called “elite access” -- a small group of highly trained computer specialists and data analysts -- the approach in Philadelphia focuses on data that is not highly specialized and is already publicly available, such as transit data, day care centers, information about flu shot locations, etc. Most people will consume the existing data through web browsers, either on their desktops or mobile phones. Headd describes Philadelphia’s approach to open data as a focus on the “last mile.” By that he means that the city invites civic hackers who recombine the existing disconnected datasets in a mindful way to go beyond mere display of data sets, as is done on data.gov. The city wants to collaboratively build new mobile phone applications by recombining data. Events such as “Code for Philly” promote collaboration with the local tech community to use data and build new projects that have the potential to create something of value for the public. Philadelphia, much like Boston, Baltimore, and New York City, has a very active civic technology community with programmers who are passionate about their city. Headd’s goal is to capitalize on that passion. One example of Headd’s success is applications such as CityGoRound.org, a clearing house for applications built around transit data that help users catch trains. The application and code are made available for reuse in other cities by simply plugging in local transit data. Transit authorities agreed to a standard that makes it easy to share existing applications. As a result, the city and its technology stakeholders are collaboratively building an entire ecosystem around government data from which all cities can benefit. One of the challenges is convincing citizens to reuse the data and make use of the applications, Headd says. One approach Philadelphia has chosen is to advertise the newly created third-party products on public buses (see for example SEPTAlking). However, the question of endorsing and publicly sponsoring products that were built outside of government is still an unresolved issue. Another challenge is changing the bureaucratic culture. For Headd, the solution is to lead a conversation about the effectiveness and efficiency of the current use of government data.
He shows public managers how they can reduce inefficiencies in responding to a steady stream of Freedom of Information Act requests to release data to citizens or journalists. Requests can be burdensome and labor-intensive to research. Headd works with public managers to identify the five most common data requests, collaboratively release the data and reduce the administrative burden. Employees can simply point requestors to the publicly available dataset and save time and resources. For example, the Department of Licenses and Inspections receives multiple requests to release data about the number and locations of vacant houses as well as code violations. By releasing the data on a public website, the city allowed developers to create mobile applications, thereby significantly reducing the number of written requests and phone calls. The research activities for similar types of requests are minimized by simply pointing requestors to the new app. Government staff can then focus their attention on the core mission, instead of being distracted by FOIA requests. Hackathon events have also enabled efficiencies by allowing civic hackers to build a service on top of government data sets -- they are effectively helping themselves instead of having to reach out to government for help. Headd shared a few insights on how other chief data officers can tackle issues in their own cities: “Nobody wants to be first, so point people to other success stories in other agencies,” he said. He is continually evangelizing about the value of big data, promoting developments elsewhere, which helps people understand the benefits of releasing data. He suggests showing public managers tangible benefits instead of talking about openness or accountability, which can be very difficult to quantify, especially in budget-driven conversations. The big data applications Headd sees are limitless: Budgets, spending, crime or transit data enable people to see how well city employees are doing their jobs and help educate the public about improvements in services. Most of the news coverage government receives is unfortunately focused on things that go wrong -- big data can change the focus. Lastly, social media and government data can come together to create more personalized connections with citizens. Philadelphia has identified about 40 processes, such as voting, in which engagement is low and new experiments to increase feedback are needed. The city recently launched an application to pull citizen opinions into the policy-making processes: Textizen.com allows users to send in their feedback by cellphone -- without needing an expensive smart phone to actively participate in the policy-making process. By institutionalizing easy-to-use tools to which most have access, tools like Textizen can become part of a government’s future planning process and can automatically generate input without hosting town hall meetings, which attract a limited number of participants. The example of Philadelphia’s success is certainly an outlier: The city is blessed with a unique combination of advantages that other local governments might not have: - A mandate to reuse public information, - A technologist who understands managerial, technological and cultural issues in government, and - A unique tech community with a passion for the city and an interest in innovation. However, all cities around the United States are invited to simply reuse existing applications without reinventing the wheel on a daily basis.
Get going with resources that are already freely available and dive into the future of big data in government.
<urn:uuid:5c98a3b1-f374-4bc4-a9f0-b6a53fa138cd>
CC-MAIN-2017-04
http://www.nextgov.com/technology-news/tech-insider/2013/03/follow-phillys-lead-and-dive-big-data-future/62108/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00517-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949104
1,961
2.71875
3
The Department of Defense (DoD) is funding research to create a cloud computing environment that can heal itself after a cyber attack. Researchers at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory (CSAIL) are working on a new system that would help a cloud identify an attack and recover from it almost instantaneously, according to MIT. The work is part of the Defense Advanced Research Projects Agency (DARPA) Mission-oriented Resilient Clouds (MRC) project, which aims to create a cloud network whose resiliency is based on its ability to adapt. Generally, if a network is attacked, the entire infiltrated system shuts down, regardless of which component–PC, website, or server, for instance–the attack targeted. MIT researchers at the Center for Resilient Software at CSAIL are trying to develop a system that can tell when something is amiss with a network and defend against it as soon as it happens. via InformationWeek Government, continued here.
<urn:uuid:fe973c49-03d7-4844-8607-1a266330f7e9>
CC-MAIN-2017-04
http://www.fedcyber.com/2012/02/28/darpa-mit-research-a-self-healing-cloud/?shared=email&msg=fail
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00545-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952716
212
2.859375
3
Security is a major part of the foundation of any system that is not totally cut off from other machines and users. Some aspects of security have a place even on isolated machines. Examples are periodic system backups, BIOS or power-on passwords, and self-locking screensavers. A system that is connected to the outside world requires other mechanisms to secure it: tools to check files (tripwire), audit tools (tiger/cops), secure access methods (kerberos/ssh), services that monitor logs and machine states (swatch/watcher), packet-filtering and routing tools (ipfwadm/iptables/ipchains), and more. System security has many dimensions. The security of your system as a whole depends on the security of individual components, such as your e-mail, files, network, login and remote access policies, as well as the physical security of the host itself. These dimensions frequently overlap, and their borders are not always static or clear. For instance, e-mail security is affected by the security of files and your network. If the medium (the network) over which you send and receive your e-mail is not secure, you must take extra steps to ensure the security of your messages. If you save your secure e-mail into a file on your local system, you rely on the filesystem and host access policies for file security. A failure in any one of these areas can start a domino effect, diminishing reliability and integrity in other areas and potentially compromising system security as a whole. This short appendix cannot cover all the facets of system security but does provide an overview of the complexity of setting up and maintaining a secure system. This appendix provides some specifics, concepts, guidelines to consider, and many pointers to security resources. Other Sources of System Security Information Depending on how important system security is to you, you may want to purchase one or more of the books dedicated to system security, read from some of the Internet sites that are dedicated to security, or hire someone who is an expert in the field. Do not rely on this appendix as your sole source of information on system security. One of the building blocks of security is encryption, which provides a means of scrambling data for secure transmission to other parties. In cryptographic terms, the data or message to be encrypted is referred to as plaintext, and the resulting encrypted block of text as ciphertext. A number of processes exist for converting plaintext into ciphertext through the use of keys, which are essentially random numbers of a specified length used to lock and unlock data. This conversion is achieved by applying the keys to the plaintext by following a set of mathematical instructions, referred to as the encryption algorithm. Developing and analyzing strong encryption software is extremely difficult. There are many nuances and standards governing encryption algorithms, and a background in mathematics is requisite. Also, unless an algorithm has undergone public scrutiny for a significant period of time, it is generally not considered secure; it is often impossible to know that an algorithm is completely secure but possible to know that one is not secure. Time is the best test of an algorithm. Also, a solid algorithm does not guarantee an effective encryption mechanism, as the fallibility of an encryption scheme frequently lies in problems with implementation and distribution. An encryption algorithm uses a key that is a certain number of bits long. 
Each bit you add to the length of a key effectively doubles the key space (the number of possible keys, which is 2 to the power of the length of the key in bits: a 2-bit key has a key space of 4 (2^2), a 3-bit key a key space of 8 (2^3), and so on) and means that it will take twice as long for an attacker to decrypt your message (assuming that there are no inherent weaknesses or vulnerabilities to exploit in the scheme). However, it is a mistake to compare algorithms based only on the number of bits used. An algorithm that uses a 64-bit key can be more secure than an algorithm that uses a 128-bit key. The two primary classifications of encryption schemes are public key encryption and symmetric key encryption. Public key encryption, also called asymmetric encryption, uses two keys: a public key and a private key; these keys are uniquely associated with a specific individual user. Symmetric key encryption, also called secret key encryption, uses one key that you and the person you are communicating with (hereafter referred to as your friend) share as a secret. Public key algorithm keys typically have a length of 512 bits to 2,048 bits, whereas symmetric key algorithms use keys in the range of 64 bits to 512 bits. When you are choosing an encryption scheme, realize that security comes at a price. There is usually a trade-off between the resilience of the cryptosystem and ease of administration. Hard to Break? Hard to Use! The more difficult an algorithm is to crack, the more difficult it is to maintain and to get people to use properly. The paramount limitations of most respectable cryptosystems lie not in weak algorithms but rather in users’ failure to transmit and store keys in a secure manner. The practicality of a security solution is a far greater factor in encryption, and in security in general, than most people realize. With enough time and effort, nearly every algorithm can be broken. In fact, you can often unearth the mathematical instructions for a widely used algorithm by flipping through a cryptography book, reviewing a vendor’s product specifications, or performing a quick search on the Internet. The challenge is to ensure that the effort required to follow the twists and turns taken by an encryption algorithm and its resulting encryption solution outweighs the worth of the information it is protecting. How Much Time and Money Should You Spend on Encryption? When the cost of obtaining the information exceeds the value realized by its possession, the solution is an effective one.
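A short Python sketch makes the key-space arithmetic concrete. The brute-force guess rate used here is an arbitrary assumption chosen purely for illustration, not a statement about any real hardware; the key lengths are just sample values.

```python
def key_space(bits):
    """Number of possible keys for a key of the given length in bits."""
    return 2 ** bits

# Assumed brute-force rate: one billion key guesses per second (illustrative).
GUESSES_PER_SECOND = 1_000_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (40, 56, 64, 128):
    # On average an attacker searches half the key space before succeeding.
    years = key_space(bits) / 2 / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits:>3}-bit key: {key_space(bits):.3e} keys, "
          f"~{years:.3e} years at the assumed rate")

# Adding one bit doubles the key space, as the text states.
assert key_space(65) == 2 * key_space(64)
```

Running it shows why the jump from 40 to 128 bits matters so much against brute force, while also leaving intact the text's caveat that key length alone does not make an algorithm or its implementation secure.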
<urn:uuid:9643dccd-a30d-4f88-8622-250ed747a866>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2003/04/17/linux-security-kinds-of-encryption/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00481-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942809
1,221
3.125
3
LIMA, Peru (AP) -- When a fall down a flight of stairs left Jose Juape's beloved grandmother unable to walk and the family couldn't afford a wheelchair, he set out to build her a remote-control robot. As the 80-year-old matriarch sat motionless on a ragged sofa in their home abutting a dusty street market in the Lima shantytown of Comas, Juape, then 14, rooted through junkyards for parts. He scrounged old air conditioner motors, pulleys, wires and an ancient car battery to power the robot. After more than a year of experimenting, Juape, who knows little math or science, built "The Condor," a man-size robot covered in tin foil that can roll about the house on wheels, pick up objects with pincer-like hands and turn its head. It is operated by a hand-held control panel. "If you can't buy something in Peru, you make it yourself," said Juape, whose next project is to make an electricity generator from scrounged magnets. There's one hitch, and it's human rather than technological. Juape's grandmother is afraid of the robot, calling it "the dead soul." It was unclear if Juape had tried to just make her a wheelchair instead. Peru's sprawling shantytowns are hotbeds of creativity. Spurred by desperate poverty, their inhabitants are inventing their own machines and building everything from toys to car parts from junk. A national inventors contest attracted hundreds of entries, including a snail-shaped concert harp, acne cream made from tree bark, socks that give a massage and a combination bed, chair and urinal. Many of the best inventions came from the poor, said Isaias Flit, a lawyer with Peru's copyright office, Indecopi, which organized the event. "There is a phenomenal creativity among Peruvians, especially the poor, who trace their roots back to the rich Inca or pre-Inca Indian cultures," said Luis Herrera, a psychoanalyst who studies Peru's poor. "This creativity emerges in times of crisis." During his studies of shantytown psychology, Herrera found blocks of mechanics who had never read a manual but could rebuild cars from scrap and street markets filled with rejigged appliances. He also found forgers who could expertly fake everything from money to medical degrees. These self-taught skills have led to inventions, few of which are patented, said inventor Passaro, who lives in a tough part of downtown Lima. Passaro has invented more than a dozen items, including special mural-painting brushes and giant hand puppets operated by piano-like keys. He built a swimming aid to help his wife learn to swim, and the hand puppets so that the school where his wife taught could put on a show even though it had no money. Reprinted with permission of The Associated Press. Random Access | Brian McDonough Survivalists prepare for and "starvation" PORTLAND, Maine (AP) -- While government CIOs nationwide search for solutions -- and funding -- for the Y2K problem, some citizens are heading for the hills. Or the forests. Or anywhere modern technology isn't. "I'm sure you understand what could happen with civil unrest," Jim Apalsch told the Maine Sunday Telegram in November. "You could understand what would happen if people were starving and their families were starving." In the unofficial race to be the first to panic over the millennium bug, Apalsch and his family were spending autumn in Caribou, Maine, while he scours rural areas for a home with some acreage and few neighbors. 
The former Milwaukee resident plans to establish a self-sufficient homestead equipped with solar and wind power, a garden and a greenhouse in which to weather the coming electronic apocalypse. "Y2K is going to affect the economy and the food distribution channels. It could lead to financial collapse," he prophesied. "It's smart to me to think ahead about what that means." Year 2000 survivalists across the country are taking similar steps, and even buying guns, to protect their homes from the civil unrest they predict will ensue when "99" rolls over to "00." They fear that the millennium bug will cause electrical systems to crash and transportation networks to gridlock. They foresee financial collapse and rampant crime. "I've seen people getting my newsletter who are saying we should get guns and bullets," said Jim Majka of Fort Kent, Maine, editor of a newsletter on how to survive the year-2000 problem and its implications. "I, myself, am not doing that," Majka said. "If I lived in Boston, that'd be different." From The Associated Press.
<urn:uuid:e8c11c20-cdfe-4ca2-9af7-0a2f7d3167e6>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/100551754.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00297-ip-10-171-10-70.ec2.internal.warc.gz
en
0.972943
1,004
2.609375
3