Some people have a "green thumb" that gives them a natural knack for gardening and identifying plants, trees, succulents, and more. Many of us need a little more help – and now that assistance is available right from phones and mobile devices, thanks to artificial intelligence. Researchers have created AI apps that can help anyone identify a plant by snapping a photo of it with a smartphone camera. The apps can even make your garden grow and help your plants thrive by telling you when your green friends need more water or fertilizer.

How Plant Identification Apps Work

Now you can take a photo of any plant and get an immediate and highly accurate identification of the plant type and its features. Apps like PictureThis and PlantSnap can identify and help people care for thousands of plant species with artificial intelligence and machine learning technology. The apps train their deep learning models with millions of photos taken by plant lovers and botanists all over the world.

To identify a plant, users snap a photo of it, and the app identifies it in seconds. Some of the apps give not only the genus and common name of the plant, but also information on its origin, mature size, and any flowers or fruit associated with it. PlantSnap, one of the most comprehensive apps, has over 600,000 plants in its database and recognizes 90% of all known plant and tree species worldwide. PictureThis is a similar plant identification app that also helps users detect issues and care for sick or struggling plants: users can take a picture of the affected part of a plant and get diagnosis and treatment suggestions.

Fighting Invasive Species with AI

Environmental scientists in the UK have been looking for ways to fight invasive plants that damage natural landscapes and building foundations, but some species can be difficult to spot. Artificial intelligence researchers are now hoping to train machine learning algorithms to recognize invasive plants so that the damage can be minimized. Last year, researchers from the UK Centre for Ecology and Hydrology (UKCEH) launched a pilot program to collect images from cameras mounted on top of vehicles, label those images with a GPS tag, and use them to train their machine learning algorithms. Vehicle-mounted cameras are a good choice for this project because many invasive species live along vegetated roadsides.

One of the scientists' first tasks is to train their model to recognize Japanese knotweed, a particularly harmful plant in the UK. Once the knotweed is identified, researchers can remove it from vulnerable areas. If successful, this type of program could be adopted by other countries battling invasive plant species.

Highly Sophisticated AI Systems

The artificial intelligence technology behind the latest plant identification apps is particularly impressive because it has learned to recognize and categorize over 300,000 classes. That kind of volume requires millions of training images, particularly because so many plants look similar to each other. While the app experiences are simple and intuitive, and the results users get back are useful and accurate, the technology working behind the scenes is powerful. It puts the vast databases of the natural world right in the palms of our hands – a tremendous breakthrough for hobbyists, serious gardeners, and environmentalists alike.
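Under the hood, the identification step described above comes down to picking the most probable label from a classifier's output. The sketch below illustrates only that final step (Go is used purely for illustration; the species labels and probability scores are invented, and a real app would obtain them from a trained image model):

```go
package main

import "fmt"

// bestMatch returns the most probable species and its confidence from
// the probability vector an image classifier produced for one photo.
func bestMatch(labels []string, probs []float64) (string, float64) {
	best, bestProb := "", 0.0
	for i, p := range probs {
		if p > bestProb {
			best, bestProb = labels[i], p
		}
	}
	return best, bestProb
}

func main() {
	// Hypothetical model output for a single snapshot.
	labels := []string{"Japanese knotweed", "Common ivy", "Boxwood"}
	probs := []float64{0.91, 0.06, 0.03}

	species, confidence := bestMatch(labels, probs)
	if confidence < 0.5 {
		fmt.Println("No confident match - ask the user for another photo")
		return
	}
	fmt.Printf("Identified %s (%.0f%% confidence)\n", species, confidence*100)
}
```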
Climate change knocks a crucial Twitter data center offline. If you wonder about the effect of extreme heat, ask Twitter; the company just lost a key data center in California because of it. California's tremendous heat wave hit one of Twitter's biggest data centers. Data centers, which house computers, servers, and storage devices, are essential to the operation of major social media sites.

Extreme heat takes the Twitter data center offline

Extreme heat in California has forced Twitter to lose one of its critical data centers. Twitter and other big social media companies rely on data centers, which are enormous warehouses full of computers, including servers and storage systems. Controlling the temperature in those centers is essential to prevent the computers from overheating and breaking down. Some major companies are increasingly looking to locate their data centers in colder regions to reduce cooling expenses: Google, for instance, constructed a data center in Finland in 2011, and Meta opened a data center in northern Sweden in 2013.

According to an internal memo obtained by CNN, the disruption caused Twitter's physical infrastructure at the Sacramento facility to shut down completely. "On September 5th, Twitter experienced the loss of its Sacramento (SMF) data center region due to extreme weather. The unprecedented event resulted in the total shutdown of physical equipment in SMF," wrote Carrie Fernandez, Twitter's vice president of engineering. Its data centers in Atlanta and Portland remain operational, but if another one goes offline, "we may not be able to serve traffic to all Twitter's users." As a result, Twitter is in a "non-redundant state."

The memo goes on to prohibit non-essential product changes until Twitter can completely restore its Sacramento data center services. Except for changes needed to address service continuity or other critical operational demands, all production changes, including deployments and releases to mobile platforms, are restricted, according to Fernandez.

Did Twitter really lose the data center due to extreme heat?

According to Twitter whistleblower Peiter Zatko, the data center story is being used as cover: he claims officials concealed important information concerning Twitter's frequent data breaches and insufficient user data protection. Zatko, who oversaw security at Twitter until he was fired in January, claims that the corporation misled regulators about its efforts to safeguard the platform.

California heat wave

More than 1,000 warm-temperature records in the West were broken during the August 30 – September 10 California heat wave. In Sacramento, where meteorological records date back to 1877, it set an all-time high of 116 degrees Fahrenheit (46.67 °C).

What is a data center?

Organizations employ data centers, facilities made up of networked computers, storage systems, and computing infrastructure, to gather, process, store, and distribute massive amounts of data. Businesses often rely heavily on the applications, services, and data housed in a data center, making it an essential asset for daily operations. Facilities for safeguarding and protecting internal, on-site, and cloud computing resources are being incorporated into enterprise data centers more frequently, and the distinction between company data centers and cloud providers is blurring as more businesses adopt cloud computing.
Nearly all enterprise compute, data storage, network, and business applications are supported by data centers. To the extent that a modern organization runs its operations on computers, the data center is the business.
Air cooling has dominated the data center cooling industry, not only because of its convenience but because previously installed air conditioning units need only a little expansion to provide cool air for the IT equipment. Liquid cooling, on the other hand, raised many serious concerns, especially when it was first introduced to the market. These concerns include:
- Liquid leaks and their effects on an electrical facility like a data center
- Huge power consumption
- Component failure and malfunctions
- Humidity and condensation
- Massive carbon footprint

Environmental Impact and Energy Efficiency

The growing number of data centers using liquid cooling can take the infrastructure to the next level. Because many businesses now realize the benefits of liquid cooling, its adoption could double in the next three years. Data centers could also use this proposition to attract customers, as many customers are now looking for sustainable and energy-efficient partner companies.

The Limitations of Air-Cooling Technology

Most data center equipment is designed for air cooling and is not viable for liquid cooling. However, it can be better for a data center to convert completely to liquid cooling than to redesign its entire facility. In this case, it's best to install an air-to-liquid Chilldyne Cooling Distribution Unit (CDU) as a first step before retrofitting the data center. This allows a liquid-cooled rack of servers using direct-to-chip cold plates to be deployed without a dedicated liquid loop running through the data center, with the added benefit that the liquid is handled in smaller, more manageable volumes.

Retrofitting a data center to liquid cooling is a complex process. However, data centers can always rely on The Green Grid, ASHRAE, and the Open Compute Project, which can assist with everything from the compliance of construction materials to the actual implementation of the chosen liquid cooling method. Another option is the development of prefabricated modular data centers, where liquid cooling can be added to the facility without any reconstruction. Many data centers could actually retrofit to liquid cooling right away; however, many operators still believe they can remain efficient with best practices and optimal methods for air cooling.

How Will the Transition Progress?

Of course, the transition from air cooling to liquid cooling will not happen overnight. There may be a transitional technology to help the industry through the inevitable change, and it might just take the form of hybrid coolers. Hybrid coolers can be more efficient than today's air-cooled devices. However, data centers supporting hybrid coolers require at least two cooling plants: one to cool the water that runs through the cold plates on the IT devices, and another to cool the air-cooled loads in the building. Providing two plants is expensive and may affect the return on investment. The biggest advances in the industry will occur when existing air-cooled data centers have reached their physical limit for cooling, and the only option for adding more capacity is refreshing the air-cooled IT in the data center to water-cooled IT equipment.
Even if this change comes with the addition of plants to provide the cooling water to the new water-cooled equipment, adding that infrastructure is still more favorable than building new data centers from the ground up. As those modifications become commonplace, the industry will see a clearer path to the purer, more leading-edge design of immersion cooling. That change will come in the next five years, it will dramatically change the industry, and we'll check back again in ten years.

Liquid Cooling Monitoring

Monitoring and alarms are essential for any technology in the contemporary data center. AKCP Monitoring Solutions includes a software suite that provides monitoring and alerts, covering temperatures, flow, pressures, and leak detection, and importantly can report into data center management software suites.

Power Monitoring Sensor

The power of the cooling unit was monitored using a power meter that records real-time power consumption. The AKCP Power Monitor Sensor provides vital information and allows you to monitor power remotely, eliminating the need for manual power audits and providing immediate alerts to potential problems. Power meter readings can also be used with the sensorProbe+ and AKCPro Server for live PUE calculations that analyze the efficiency of power usage in your data center. Data collected over time with the Power Monitor Sensor can also be viewed using the built-in graphing tool.

Wireless Pipe Pressure Monitoring

The pressure in the tank was monitored by an automatic pressure relief valve with a pressure sensor. A digital pressure gauge monitors all kinds of liquids and gases, with remote monitoring via the internet and alerts and alarms when pressures fall outside pre-defined parameters. It can also upgrade existing analog gauges.

Wireless Valve Motor Control

Wireless, remote monitoring and control of motorized ball valves in your water distribution network. Check status and remotely actuate the valves, receive alerts when valves open and close, or automate the valve based on other sensor inputs, such as pressure gauges or flow meters.
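The live PUE calculation mentioned above is a simple ratio of total facility power to the power drawn by the IT equipment alone. A minimal sketch of the arithmetic, with invented meter readings (AKCPro Server performs this calculation internally from its own sensor data):

```go
package main

import "fmt"

// pue returns Power Usage Effectiveness: total facility power divided by
// IT equipment power. 1.0 is the theoretical ideal; lower is better.
func pue(totalFacilityKW, itLoadKW float64) float64 {
	return totalFacilityKW / itLoadKW
}

func main() {
	// Hypothetical readings from facility-level and rack-level power meters.
	total, itLoad := 1250.0, 800.0
	fmt.Printf("PUE = %.2f\n", pue(total, itLoad)) // 1.56
}
```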
What is XDR (Extended Detection and Response)?

Extended Detection and Response, abbreviated XDR, is still a fairly new security concept. The term was coined in 2018 by the US company Palo Alto Networks and Gartner analysts, among others. In the meantime, several products exist that implement this security concept.

XDR is a conceptual and technical approach from the security environment that enables the detection of and defense against security threats across the entire IT infrastructure of a company. Unlike predecessor concepts such as EDR (Endpoint Detection and Response), XDR not only focuses on threat detection and defense at endpoints but also integrates all IT layers, applications, and devices, such as servers, networks, applications, and cloud services, into uniform, transparent security management. The security of the IT infrastructure is no longer viewed only in sub-areas, but holistically. The system collects, correlates, and analyzes data from the individual components and uses the information obtained for automated or manual threat prevention measures. XDR is not a purely reactive concept, but can also take proactive action. Security dashboards give security personnel a quick overview of the security and threat situation. The goal of Extended Detection and Response is to ensure a high level of security for the IT infrastructure and prevent damage from data breaches, data loss, and other cyber threats.

Systems and components involved in Extended Detection and Response

XDR doesn't just focus on endpoint security. It takes into account an organization's entire IT infrastructure. This includes:
- Virtual and physical servers
- Networks and network components such as routers and switches
- Applications such as email, ERP applications, databases, etc.
- Cloud services and cloud workloads such as computing, storage, software, platforms, and services

Differentiation between XDR and EDR

Extended Detection and Response can be viewed as an extension of the EDR concept. EDR stands for Endpoint Detection and Response and focuses on the security of endpoints such as PCs or laptops. Unlike simple antivirus solutions, EDR is able to record and analyze endpoint behavior. Endpoint cyberattacks and malware can be detected with EDR based on suspicious behavior: instead of looking for malware and virus signatures, Endpoint Detection and Response identifies threats based on endpoint behavior and then defends against them. XDR goes a step further by including all systems, components, and layers in the analysis and threat mitigation. Like EDR, Extended Detection and Response leverages machine learning and AI techniques for threat analysis and automated threat response.

Benefits of Extended Detection and Response

By including all systems, components, and layers of the IT infrastructure, XDR increases visibility into and context around potential cybersecurity threats. The view is not limited to sub-areas, so a comprehensive picture of the security situation emerges. Events can be correlated in order to respond more quickly with prioritized measures. The holistic view also improves the ability to respond proactively and automatically. Those responsible for IT security work more efficiently and in a more targeted manner, and security management is centralized and thus more productive.
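As a rough illustration of the correlation step described above, the sketch below groups alerts from different telemetry sources by the asset they concern and escalates any asset flagged by more than one source. It is a deliberately simplified toy rather than any vendor's implementation; the event fields and the two-source threshold are assumptions.

```go
package main

import "fmt"

// Event is a minimal, made-up telemetry record from any IT layer an XDR
// pipeline might ingest (endpoint agent, firewall, cloud audit log, ...).
type Event struct {
	Source string
	Host   string
	Alert  string
}

func main() {
	events := []Event{
		{Source: "endpoint", Host: "srv-01", Alert: "suspicious process"},
		{Source: "firewall", Host: "srv-01", Alert: "connection to known C2 address"},
		{Source: "cloud-audit", Host: "srv-02", Alert: "failed login"},
	}

	// Correlate by host: independent sources flagging the same asset are
	// treated as a single, higher-priority incident.
	byHost := make(map[string][]Event)
	for _, e := range events {
		byHost[e.Host] = append(byHost[e.Host], e)
	}
	for host, evs := range byHost {
		if len(evs) >= 2 {
			fmt.Printf("incident: %s flagged by %d independent sources\n", host, len(evs))
		}
	}
}
```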
Why Isn't There One Standard Polishing Process?

Here's a key reason why we don't have one standard polishing process: single-fiber and multi-fiber ferrules are composed of different materials and have different shapes, diameters, hardness, and tolerances. Here are two quick examples:

Single-fiber ferrules are often pre-shaped with either a flat surface or a conical tip (with various conical angles like 45-60 degrees) and pre-domed, or even pre-angled and pre-domed. Zirconia ferrules are available in several diameters: 2.5mm, 1.58mm, 1.25mm, and 1.0mm. Compounding this, there are many different connector components with considerations such as spring force and dimensional tolerances. For single-fiber connectors, the most common material is ceramic, yet some are stainless steel. Multi-fiber ferrules are made of a type of plastic (a glass-filled polymer).
The way URL parsing is implemented in some Go-based applications creates vulnerabilities that could let threat actors carry out unauthorised actions, according to research by the Israeli cloud-native application security testing company Oxeye. Go, also known as Golang, is an open source programming language created for the large-scale development of dependable and effective software. Some of the biggest companies in the world use Go, which is backed by Google, to build cloud-native applications, including those for Kubernetes. Researchers from Oxeye examined Go-based cloud-native applications and identified an edge case that may have significant ramifications. They have named the problem ParseThru, and it has to do with unsafe URL parsing: Go accepted semicolons as a valid delimiter in the query portion of a URL up until version 1.17.
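The behaviour at the heart of ParseThru is easy to reproduce. In Go releases before 1.17, net/url treated ';' like '&' when splitting query parameters; from 1.17 onward it no longer does, ParseQuery reports an error for semicolons, and URL.Query() silently drops the affected pairs. A front-end service built on an older toolchain can therefore see parameters that a newer backend ignores, or vice versa. The URL below is only an illustration:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Hypothetical request a proxy might forward to a Go backend.
	u, err := url.Parse("https://api.example.com/action?user=guest;role=admin")
	if err != nil {
		panic(err)
	}

	// Go < 1.17: ';' was a separator, so this yielded user=[guest] role=[admin].
	// Go >= 1.17: ParseQuery returns an "invalid semicolon separator" error
	// and the pair containing the semicolon is not included in the result.
	values, err := url.ParseQuery(u.RawQuery)
	fmt.Println("parsed:", values, "err:", err)
}
```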
A Big Data Revolution in Astrophysics

Humanity has been studying the stars for as long as it has been able to gaze at them. The study of the stars has led to one revelation after another (that the planet is round, that we are not the center of the universe) and has also spawned Einstein's general theory of relativity. As more powerful telescopes are developed, more is learned about the wild happenings in space, including black holes, binary star systems, the movement of galaxies, and even the detection of the Cosmic Microwave Background, which may hint at the beginnings of the universe.

However, all of these discoveries were made relatively slowly, relying on the relaying of information to other stations whose observatories may not be active for several hours or even days—a process that carries a painful amount of time between image retrieval and potential discovery recognition. Solving these problems would be huge for astrophysics. According to Peter Nugent, Senior Staff Scientist at Berkeley's National Laboratory, big data is on its way to doing just that.

Nugent has been the expert voice on this issue following his experiences with an ambitious project known as the Palomar Transient Factory (PTF). The research endeavor is a collaborative effort between Caltech and Berkeley to study the formation of various astronomical phenomena. Groups used the telescope on Palomar Mountain in attempts to detect planets beyond our solar system, dwarf novae, core-collapse supernovae, blazars (super-compressed quasars which suggest the existence of supermassive black holes at the center of galaxies), and neutrino emissions, among other things. The "transient" in the group's name denotes that the discoveries are meant to be new, emerging, or moving, essentially in transition. Nugent's group's specific focus was on identifying new Type Ia supernovae. The main focus, however, was not on the discovery of the phenomena but rather on the development of a machine learning system that could quickly identify worthy candidates.

The process of identifying worthy candidates is straightforward. When an image is processed of a certain area of space, it is compared to an earlier image in a database of the exact same area of space. If the differences, or "subtractions," meet a certain threshold of significance, the transient is submitted for closer examination by humans, who take their turn at classifying the potential phenomenon.

Over around 800 operational days, 1.8 million images were taken. Those were turned into 32,000 models of a particular portion of the sky, or references. 1.4 million subtractions were performed, leading to 900 million discovery candidates and 45,000 transients (candidates that made it past the machine's vetting process). Every single item was stored. It is not difficult to see where this data got big.

Usually, the individual transient is discarded. Several factors, including time of day, cloud cover, and the amount of incoming starlight at a particular time, can lead to "significant" readings that are not at all significant. The goal, simply, is to get the machine to classify better than the humans so the humans need only be bothered by the most serious of candidates.

In true scientific fashion, the project's first major discovery was made when everything possible went wrong with the computing system. According to Nugent, Caltech had changed a key IP address, the main programmer was in Israel, and the entire process had to be done by hand.
As it turned out, Nugent found his emerging Type Ia supernova, which had occurred only hours before (plus the amount of time it took for the light to reach us). However, that debacle pointed out areas of weakness, and the system improved. According to Nugent, it is actually outperforming the humans in both speed and accuracy. While the improvement in accuracy is perhaps a tad unexpected, Nugent notes that the computer will never be classifying phenomena two hours after returning from the pub. As a result, three or four emerging Type Ia supernovae or other similar-scale astronomical events are found each day, according to Nugent. Contrast this with the beginning of the project, when finding the first one was a big deal. Such a discovery rate is truly remarkable and could significantly advance the world (or universe) of astrophysics.

That would be especially true if the PTF can accomplish its future goals for improving the survey. On the docket for Nugent is a five-fold expansion of the range, image time intervals of 100 seconds, and a candidate-list turnaround of 10-20 minutes. However, since each image is stored and analyzed, the data requirements would obviously be significantly higher, a difficult proposition for a survey that has already used 175 of its 250 purchased terabytes. Indeed, according to Nugent, it is between image processing and candidate generation that potential bottlenecks would occur. The data, having been sufficiently whittled down, would easily move to classification and beyond from there.

Astronomical discoveries are always exciting, even for the astrophysics "uninitiated". They give us a sense of where we stand in the universe and insight into our beginnings. Nugent and the PTF are working to accelerate these discoveries using big data analytics, and their preliminary results are promising.
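The candidate-generation step described above, in which a stored reference image is subtracted from a new exposure and pixels whose change exceeds a significance threshold are flagged, can be sketched in a few lines. This is a toy version only: real pipelines first align, convolve and photometrically match the images, and the pixel values, noise level and threshold here are invented.

```go
package main

import (
	"fmt"
	"math"
)

// transientCandidates returns the indices of pixels whose brightness
// changed by more than `threshold` noise standard deviations relative
// to the reference exposure of the same patch of sky.
func transientCandidates(current, reference []float64, noiseSigma, threshold float64) []int {
	var hits []int
	for i := range current {
		if math.Abs(current[i]-reference[i]) > threshold*noiseSigma {
			hits = append(hits, i)
		}
	}
	return hits
}

func main() {
	reference := []float64{10, 11, 10, 12, 10, 11}
	current := []float64{10, 11, 10, 45, 10, 11} // one pixel brightened sharply
	fmt.Println("candidate pixels:", transientCandidates(current, reference, 1.5, 5))
}
```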
The Defense Advanced Research Projects Agency (DARPA) has a clear vision for how the military will use Internet of Things sensors in the future: it’s zero. Well, N-ZERO to be exact, and it’s starting to make headway.

Started in 2015, the initiative, Near Zero Power RF and Sensor Operations (N-ZERO), aims to tackle a conundrum of IoT. The Defense Department’s research arm notes that high-tech military sensors use active electronics to detect vibration, light, sound or other signals on the battlefield. Persistent sensing is used to protect troops at forward operating bases. This is important because troops need to know about enemy movements and changes to the battlefield. However, such sensors constantly consume power, often spending time processing what turns out to be useless data. That power consumption then saps the sensors’ battery life and also slows the development of new sensors, DARPA says. It’s also costly in two ways: the armed services need to constantly put new sensors into war theaters, and sensors that die can expose soldiers to danger.

As DARPA notes, N-ZERO “seeks to overcome the power limitations of persistent sensing by developing wireless, event-driven sensing capabilities that would allow physical, electromagnetic and other sensors to remain dormant — effectively asleep yet aware — until an event of interest awakens them.” That program is starting to make progress, according to DARPA.

DARPA Makes Progress on Lower-Powered IoT Sensors

Through N-ZERO, DARPA is trying to dramatically improve the efficiency of low power sensors at rest. The program “intends to develop underlying technologies to continuously and passively monitor the environment and activate an electronic circuit only upon detection of a specific signature, such as the presence of a particular vehicle type or radio communications protocol.” N-ZERO wants to use the specific energy in signal signatures “to detect and recognize attention-worthy events while rejecting noise and interference.”

According to SIGNAL, the official blog of the Armed Forces Communications & Electronics Association, N-ZERO “seeks to extend unattended sensor lifetime from weeks to years, cut maintenance costs and reduce the need for redeployments.” N-ZERO is also aiming to potentially cut the battery size for a typical ground-based sensor by a factor of 20 or more but maintain its operational lifetime.

“What we can do today really doesn’t fulfill the vision of the Internet of Things,” Troy Olsson, DARPA’s N-ZERO program manager, told SIGNAL. “We can either connect devices that have power already, like your refrigerator, or devices that you can recharge every day or every couple of days, like a cellular phone. You can connect and interconnect those, and some people call that the Internet of Things.”

For Olsson, true IoT will involve sensors everywhere that are untethered from either a power supply or from having to be recharged constantly. “We would like, for a long time, to stick out a sensor network to detect incursions or dangerous chemicals or other things that might be present around our forward operating bases,” Olsson says. “We can go out and change batteries every week or so, but that puts our soldiers in danger, and it requires a lot of manpower to support the maintenance of the sensor field.”

The N-ZERO program has three phases. The first, which ended December 2016, took 15 months to complete, SIGNAL notes. The second and third phases will each take one year.
Some research teams achieved goals in the program’s first phase that they were expected to reach much later, the blog notes. For example, DARPA has been able to create zero-power receivers that can detect very weak signals — less than 70 decibel-milliwatt radio-frequency (RF) transmissions, a measure that is better than originally expected. The system has also been able to detect objects correctly without raising false alarms, which can crimp battery life. In the program’s current phase, the sensors need to distinguish between cars, trucks and generators in an urban environment at close range, SIGNAL reports, and in the final phase, they will be required to classify those same targets from 10 meters (33 feet).

“The ability to sense and classify cars, trucks and generators in ... both rural and urban backgrounds from a distance of a little over 5 meters away and being able to do that with almost 10 nanowatts of power consumption is a big accomplishment in phase one of the program,” Olsson says.

Other Low-Power Wide Area Networks Abound

As the blog IoT for All notes, the N-ZERO program is similar to how many existing low-power wide area network technologies extend battery life, including those based on the Sigfox and LoRa’s LoRaWAN technologies. Those LPWANs send updates to network gateways only a few times per day, “minimizing the power needed to take in data and transmit accordingly,” the blog notes. There are also cellular IoT protocols such as LTE Category M1 and Narrowband IoT, which use an extended discontinuous reception cycle and power-saving mode to conserve power. Earlier this year, both Verizon and AT&T announced nationwide launches of LTE-M networks.

The difference between LPWAN devices and N-ZERO is in the persistent sensing the military needs for its sensors. Commercial low-power sensors can be scheduled to send data at specific intervals to conserve battery life. DARPA wants to go further with sensors that are asleep-yet-aware and can be woken at a moment’s notice, and also with RF receivers that constantly listen for signals but consume little power when a transmission isn’t happening, the blog notes.

Where will all of this lead? Probably not to a product that soldiers can use in the field right away. “I suspect what we’ll have are significant field demonstrations, but someone else will need to come in and productize what we’re doing and operationalize it either for the Department of Defense or the commercial IoT,” Olsson says. “And I fully expect applications in both.”
Can anyone explain the purpose of using constants? I know that a constant is a literal value, and it can be reused in multiple objects. Is there any purpose to using constants? Moreover, when would we create a constant in an application?

1. Suppose you are providing Gender (1. Male, 2. Female) in 10 form interfaces. Instead of hard-coding the values every time, create a constant and use it as the value in all forms.
2. For the same example, if in the future you want to add another option (3. Others) and you have used a constant, you only have to update the constant and the options will be updated in all 10 form interfaces. If you don't use a constant in this type of scenario, you have to change the options manually in all 10 forms.
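The same reasoning applies outside Appian in any language: define the list of allowed values once and reference it from every form, so adding a third option later is a single change. A generic sketch (this is ordinary Go, not Appian syntax; the form names are made up):

```go
package main

import "fmt"

// genderOptions is defined in one place; every interface that needs the
// list references it instead of hard-coding the values. Adding "Others"
// here updates all forms at once.
var genderOptions = []string{"Male", "Female"}

func renderForm(name string) {
	fmt.Printf("%s -> gender choices: %v\n", name, genderOptions)
}

func main() {
	for i := 1; i <= 3; i++ {
		renderForm(fmt.Sprintf("Form %d", i))
	}
}
```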
Medical professionals are increasingly relying on technology to make the process of delivering medical care more efficient. The COVID-19 pandemic in particular has had a huge impact on how quickly the healthcare industry has adapted to new technologies.
- 47% research doctors
- 38% research hospitals and medical facilities
- 77% book medical appointments

And that's only the beginning. Reliance on IT will only grow as new technologies like VR and AR come into use; the use of AI has seen a decline in human error when administering patient care; and wearable devices can be used to track patient care and health remotely, saving both patients and healthcare workers valuable time and money, and freeing up medical centre appointments for more urgent needs.

The rise of telemedicine and the Internet of Medical Things

The Internet of Medical Things (IoMT) is the collection of medical devices and applications that connect healthcare IT systems using online computer networks. IoMT devices connect to cloud platforms where the data is stored and analysed. It includes:
- Remote patient monitoring
- Tracking patient medication orders
- Tracking patient location
- Wearable healthcare devices
- Measuring patients' vital signs through analytic dashboards and sensors on hospital beds

Telemedicine relies heavily on IoMT to remotely monitor patients in their own homes. Hugely beneficial for both sides, this spares patients from travelling whenever they have a medical question or a change in their condition. It saw a rapid increase when the COVID-19 pandemic began; between March 2020 and October 2021, over 88,000 healthcare workers delivered 78.3 million telehealth services to 15.6 million patients in Australia. Paired with smartphone applications, telemedicine allows patients to send information and updates to their doctors with just a few touches. With such ease and accessibility, it's no wonder that the global IoMT market is predicted to grow to $254.2 billion in 2026.

The importance of big data in healthcare

Big data refers to extremely large data sets that can be analysed computationally to reveal patterns and trends, particularly relating to human behaviour. In healthcare, this means the vast quantities of data, created by the mass adoption of the internet and the digitisation of all sorts of information, including health records, that are too large or complex for traditional technology to make sense of.

The exponential growth of data has made it possible for people to learn more about their health and take better control of their lives. It is now possible for researchers to study not just isolated diseases or conditions, but the complexities of health and the factors that affect it. With the help of big data, healthcare professionals can make sure that all patients are getting the right treatment at the right time. It also helps identify which methods work best for specific medical conditions, and can even be used to predict when a patient might have an acute asthma or diabetes episode before it happens. Data analytics has become a key component of healthcare research, as it can be used to identify trends and patterns in data that may not be obvious, including early detection of trends and patterns that affect health, such as epidemics and pandemics.
- Preventing medication errors: big data can reduce medication errors that sometimes occur due to human error, by analysing a patient's records against all medications prescribed and flagging anything that seems out of place.
- Identifying high-risk patients: using predictive analytics, some hospitals have been able to reduce the number of ER visits by identifying high-risk patients, leading to better care for these patients.
- Reducing hospital costs and wait times: big data holds enormous potential for cutting healthcare costs. Predictive analytics can help with staffing by predicting admission rates over the next few weeks.
- Enhancing patient engagement: wearable devices, like Fitbits, that monitor steps taken, hours slept, heart rate, and more on a daily basis could help physicians improve patient outcomes, as well as reduce doctor visits for patients.
- Electronic health record (EHR) use: EHRs make accessing patient data much easier and faster.

Using AI to reduce human error

The healthcare sector has been slow to adopt AI, but it is now starting to catch up. In Australia, it's estimated that medical error results in 18,000 deaths every year. However, introducing artificial intelligence (AI) and machine learning (ML) in healthcare has made significant strides in reducing human error: fewer medical errors, increased productivity, more accurate diagnosis and treatment, and reduced waiting times for patients. Using digital workers also means frontline workers can spend less time on admin and more time focusing on improving their patients' quality of care. Examples include:
- Patient diagnostics: AI can connect systems and streamline clinical processes to shorten the waiting time between tests, results, and treatment.
- Patient self-service: this surged in use and popularity during the COVID-19 pandemic; medical centres replaced staff with digital worker kiosks so patients could register themselves.
- Patient engagement: AI can be used to create patient portals that manage treatment plans and auto-schedule follow-up appointments and reminders.

AI and ML have also changed the way doctors diagnose and treat patients by providing them with more data to work with; for instance, AI algorithms beat doctors in forecasting heart attacks by identifying patterns like geographic location, population and age group, race, ethnicity, and sex, which helps to prevent the worst outcomes.

Wearable devices and the cloud

The use of wearable devices like Fitbits has steadily gained popularity over the last decade. Medical health professionals are able to gather data from their patients passively and from afar to determine the likelihood of a major health event. Some of the most common devices include:
- Heart rate sensors
- Exercise trackers/step counters
- Sweat meters (used mostly by diabetics to monitor blood sugar levels)
- Oximeters (used by people with respiratory illnesses like asthma)

Wearable technology saves both patients and healthcare workers much time and money, particularly for patients who live in regional areas. Travelling to hospitals or medical centres can become costly and very time-consuming; wearable devices can instead be used to track patients' conditions and health, ensuring that they only need to visit their GP when necessary.
The importance and immediacy of wearable devices has already been proven; a study conducted by UCLA found that Fitbit activity trackers were able to more accurately evaluate patients with ischemic heart disease by recording their heart rate and accelerometer data at the same time.

Virtual reality isn't just for gamers anymore

Headsets and specially designed software are giving medical professionals a whole new way to learn and deliver healthcare – and the closure of classrooms and clinical spaces over the pandemic has accelerated this growth. Virtual reality (VR) devices for healthcare have given frontline workers more options:
- Lifelike surgical training programs
- Supplemental clinical experience
- The ability to view images from a newly detailed perspective
- Reduced skill fade by 52%
- Improved retention by up to 75%

As a learning experience, VR could be revolutionary. Very few students can look over a surgeon's shoulder during a surgery – but with a virtual reality camera, surgeons can stream operations, allowing students to be present using VR goggles. The possibilities are seemingly limitless and could fast-track the learning process for medical students.

The future of healthcare and tech

Innovations that improve our quality of life, not to mention the ease and accessibility of our care, are growing through ingenuity and technology advancements every year. From wearable devices that can be easily used and tracked by patients, to advanced VR and AI technology that will increase learning capabilities and opportunities, tech in the healthcare industry is looking to a positive future. To learn more about tech trends in the medical industry and how they can improve the quality of your health and care, contact the IT experts at INTELLIWORX today.
Data structures are organized formats that determine how a computing process stores and manages information in memory. Data structures often build on each other to create new structures, and programmers can adopt those structures that best fit a given data access pattern. Choosing the most efficient data structure for the job significantly improves algorithm performance, which accelerates application processing speeds. This enables computing systems to efficiently manage massive amounts of data within large-scale indexing, massive databases, and structured data in big data platforms. Many programmers and analytics teams will never need to program a data structure, as they can use standard libraries. However, understanding data structures helps DBAs and analytics teams optimize databases and choose the best analytics toolsets for their big data environment.

Types of Data Structures

There are different types of data structures that build on one another, including primitive, simple, and compound structures.
- Primitive data structures/types: the basic building blocks of simple and compound data structures: integers, floats and doubles, characters, strings, and Booleans. Integers, floats, and doubles represent numbers with or without decimal points. Characters are self-explanatory, and a string represents a group of characters. Booleans represent logical (true or false) values.
- Simple data structures: build on primitive data types to create higher-level data structures. The most common simple data structures are arrays and linked lists.
- Compound data structures: build on primitive and simple data structures, and may be linear or non-linear. Linear data structures form a linear sequence with unique predecessors and successors. Non-linear data structures do not form linear sequences. The non-linear tree structure is built from hierarchical relationships. As the name suggests, compound data structures allow for a far greater level of sophistication.

Another common method of categorizing data structures is by access function type.
- Set-like access: enters and retrieves data items from structures that may contain non-unique elements: sets, linked lists, and some trees.
- Key-based access: stores and retrieves data items using unique keys: arrays, hash tables, and some trees.
- Restricted access: data structures that control the time and order of data item access: stacks and queues.

Simple Data Structures

The array data structure is one of the oldest and most common types of data structure. An array consists of elements that may be values or variables. The structure identifies elements using an index or key, which enables the data structure to compute the location of each element. The initial element's memory address is called the foundation or first address. Data elements are indexed and sequentially stored in contiguous memory. There are many different types of array data structures. In one common example, many databases use one-dimensional linear arrays whose elements are the database records. Arrays may also be multi-dimensional if they access elements from more than one index. Arrays are the foundational structure for many other data structures, including hash tables, queues, stacks, and linked lists.

A linked list is the second most common type of data structure. It links elements together instead of computing addresses from an index. While an array mathematically computes data item addresses, a linked list stores data items within its own structure.
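A short sketch of the two simple structures just described: an index-addressed array in contiguous memory, and a singly linked list whose nodes each hold a value plus a reference to the next node (Go is used here purely for illustration):

```go
package main

import "fmt"

// node is one element of a singly linked list: its data plus the
// reference to the next node (nil marks the end of the list).
type node struct {
	value int
	next  *node
}

func main() {
	// Array: elements live in contiguous memory and are addressed by index.
	arr := [4]int{10, 20, 30, 40}
	fmt.Println("third element by index:", arr[2])

	// Linked list: each object stores its successor's reference instead
	// of relying on a computed address.
	head := &node{10, &node{20, &node{30, nil}}}
	for n := head; n != nil; n = n.next {
		fmt.Println("list element:", n.value)
	}
}
```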
The structure treats each element as a unique object, and each object contains the data and the reference or address of the next one. There are three types of linked lists: singly linked, where each node stores the next node's address and the end address is null; doubly linked, where each node stores the previous and next nodes' addresses and the end address is null; and circular linked, where the nodes link to each other in a circle and there is no ending null.

Compound Data Structures

Computing systems combine simple data structures to form compound data structures. Compound structures may be linear or non-linear.

Linear data structures

Linear data structures form sequences. A stack is a basic linear data structure: a logical entity pictured as a physical stack or pile of objects. The data structure inserts and deletes elements at one end of the stack, called the top. Programmers develop a stack using an array or a linked list. A stack follows the order in which the computing system performs operations, usually Last In First Out (LIFO). The primary operations for a stack are: 1) Push, which adds an item to the stack; and 2) Pop, which removes items from the stack in the reverse order to the pushes. A stack also returns an isEmpty value: "true" on an empty stack and "false" if there is data.

Instead of the stack's LIFO order, the queue data structure handles elements in First In First Out (FIFO) order. The insertion procedure is called Enqueue, which inserts an element at the rear or tail of the queue. The deletion procedure is called Dequeue, which removes elements from the front or head of the queue. A newly enqueued element reaches the front only after all of the elements ahead of it have been dequeued. Think of this structure in terms of a printer queue, where print jobs occur in order until they're printed or cancelled. A queue can be built using an array, a linked list, or stacks.

Non-linear data structures

Non-linear structures are multi-leveled and non-sequential. A graph data structure presents a mathematical image of an object set with linked pairs. The interconnected object points are vertices and the links are edges.

Hashing converts key value ranges into index ranges within an array. The hash table data structure associates each value in an array with a unique index that records the value's insertion point and location, which accelerates data ingress and searches. Hash collisions are a common occurrence, especially when hashing very large data stores. Most hash tables include collision resolution, often separately storing keys or key pointers along with their associated values.

Trees are hierarchical data structures, usually built top-down, where each node contains a unique value and references to child nodes. In any type of tree, no node points back to the root or duplicates a reference. The top node is called the root. The elements directly underneath are its children; same-level elements are siblings. If there is a level below these children, then the upper nodes are also parents. Elements occurring at the bottom of the tree are called leaves. The purpose of a tree is to store naturally hierarchical information, such as a file system. There are multiple types of trees.
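Before looking at specific tree variants, here is a sketch of the stack and queue operations described above, with slices standing in for the underlying array. Push and pop act on the same end, while enqueue and dequeue act on opposite ends (a minimal illustration, not a production implementation):

```go
package main

import "fmt"

func main() {
	// Stack (LIFO): push onto the top, pop from the top.
	stack := []int{}
	stack = append(stack, 1, 2, 3)   // push 1, 2, 3
	top := stack[len(stack)-1]       // the most recently pushed item
	stack = stack[:len(stack)-1]     // pop it
	fmt.Println("popped:", top, "isEmpty:", len(stack) == 0)

	// Queue (FIFO): enqueue at the rear, dequeue from the front.
	queue := []string{}
	queue = append(queue, "print job A", "print job B") // enqueue
	front := queue[0]                                    // head of the queue
	queue = queue[1:]                                    // dequeue
	fmt.Println("dequeued:", front, "remaining:", queue)
}
```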
A binary tree is a tree data structure where each node has no more than two children, respectively called the left child and the right child. Binary trees are often used to efficiently store routing tables for high-bandwidth routers. Another variant is the Merkle tree, which hashes the value of each child tree within its parent node; Merkle trees enable NoSQL databases like Apache Cassandra to efficiently verify large data contents.
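A minimal binary tree matching the definition above, where each node holds a value and at most a left and a right child, together with a recursive in-order traversal (illustrative only):

```go
package main

import "fmt"

// treeNode has at most two children, the left and the right child.
type treeNode struct {
	value       int
	left, right *treeNode
}

// inOrder visits the left subtree, then the node itself, then the right subtree.
func inOrder(n *treeNode, visit func(int)) {
	if n == nil {
		return
	}
	inOrder(n.left, visit)
	visit(n.value)
	inOrder(n.right, visit)
}

func main() {
	root := &treeNode{
		value: 8,
		left:  &treeNode{value: 3, left: &treeNode{value: 1}, right: &treeNode{value: 6}},
		right: &treeNode{value: 10},
	}
	inOrder(root, func(v int) { fmt.Print(v, " ") }) // 1 3 6 8 10
	fmt.Println()
}
```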
As technology changes at an ever-more-rapid pace, what legacy pieces linger on your network?

It's safe to say that just about every aspect of business is affected by the way technology has changed over the years. Even the least "plugged-in" people now communicate over VoIP phones, access the web over Wi-Fi, and send email hosted on the cloud. In the last year, virtual meetings became the norm and much of America clocked in to work from home. Over the decades, the stuff of science fiction has become part of everyday life.

Just as the children of today look backward and find reruns of The Brady Bunch bizarre, those who have lived through technological changes may find themselves in the present feeling surrounded by unfamiliar devices and systems. The business world already used the processes and roles that our digital world has automated, though that doesn't make the transition through progress any easier. Voicemail and texting take the place of answering services, CEOs and Presidents send memos over email instead of passing printed copies around, and digital files are stored in virtual folders rather than physical ones. Let's take a look at just how quickly our lives have changed with the help of members of the Anderson Technologies team.

Dawn of the Computer Era

Technological progress started long before the age of computers. Along with social, familial, and world politics changing society in the 60s, the household television brought shared culture and new products into living rooms across the nation, putting visuals front and center and slowly phasing out the family radio. Washing machines, dryers, and refrigerators helped free up time for a new potential workforce, while elite business and science institutions brought in room-filling computers for daily use, upgrading from number-crunching human calculators.

The computer age sped up progress, making it more noticeable over shorter time periods. Starting in the 1970s, computers began to fit on desks, though personal use was still mostly unheard-of. It was the beginning of something new, and inventions like the floppy disc and the first video game console, Atari, were becoming household names—even if they weren't quite commonplace. Far more ubiquitous were innovations like the VCR, which made home video recording affordable and brought top Hollywood releases to home televisions, and the microwave oven, which made cooking dinners lightning fast. Businesses and families who opted out of early updates made the decision for many reasons: finances, distrust, or reliance on older ways. Regardless of the reason, the result was often that subsequent changes became more difficult.

In the 1980s, the efforts of both Microsoft and Macintosh were in full swing, but the truly "personal" computer was still novel. Principal Farica Chang's father was an early adopter of the Tandy 1000, and Senior Administrator JR Reynolds played around with an IBM clone running DOS. Likewise, Principal Amy Anderson began using school computers to write papers, which at the time required inserting special character combinations into the text to add new paragraphs or format text. This decade saw the creation of the CD-ROM and the internet, though neither was popularized for years. The introduction of the Nintendo Entertainment System (NES) began Nintendo's rise in the growing video game market and taught a generation the foibles of dust in technology.

Business Development Executive Libby Powers remembers the pager boom of the 1990s.
The messages and codes added a little thrill to life. Pagers made communication versatile, and were certainly more portable and affordable than the massive cell and car phones of the 80s. Nonetheless, pagers would just as quickly become obsolete in the face of cheap cellular advancements. Technologies built on one another opened the world up, and those who experienced it were able to create more and more of what they previously could only imagine.

The Age of the Internet

By the 1990s, a new generation was growing up using technology in their daily lives from childhood. Content Specialist Andrea Glazer loved playing Sesame Street games on her family's Windows 95 machine, learning to use a mouse and keyboard from an early age. The internet was blooming into the World Wide Web, bringing a new e-commerce industry and the dot-com boom. New sites like Amazon, Ebay, and Yahoo looked far different from their modern incarnations. The popularization of chat rooms, instant messaging, and personal email began to globalize communication and shrink the virtual distance between family and strangers and between questions and answers, which encouraged discourse and knowledge-seeking that excited industry leaders. Amy Anderson recalls working at Southwestern Bell during this period: "I'll never forget my co-worker, Brian, explaining how the world was about to change."

The world of entertainment also exploded in the 90s. Gaming consoles from Nintendo (Gameboy, SNES), Sega (Genesis, Dreamcast), and Sony (PlayStation) filled the market, and the homes of Anderson Technologies' team members Quentin Topping, Shana Scott, and Joe Baker. Baker recalls a very 90s pizza party birthday with greasy fingers unwrapping a Sega Genesis II to resounding delight. The gaming market would quickly outgrow cartridge-based gaming in the decades to come and lead to the widespread adoption of Blu-ray discs as consoles expanded into full entertainment systems. By the end of the decade, the most popular computer operating system ever, Windows 7, hit the market and laid the groundwork for the major changes introduced in the 2000s. An AT&T sales rep convinced Principal Mark Anderson to try one of the first cell phones that could fit in a pocket, changing the way he communicated from then on.

Today and Beyond

Beyond the initial e-commerce explosion of the aughts, business and personal websites became commonplace and expected, with Google emerging as the most frequently used search engine. Breaking free of phone lines helped internet speeds handle the sudden burst in everyday traffic, making instantaneous data transfer and features like streaming video, online gaming, and cloud computing possible, powerful, and affordable. Here are some of the biggest technologies introduced to the world in this decade, expanding in usefulness, and creating new ways to live, share, and build:
- USB Flash drives
- MP3 players
- SMS Texting
- Online gaming
- Smart Phones
- Adobe Flash Player
- Smart Watches
- P2P File-sharing

In the 2010s, technology and business built on the foundations of previous decades flourished. Shifting business models and profit driving brought us to the world of data collection and experience curation, all while increasingly more of the world operated online. Instead of running a server in a backroom or a basement, cloud computing used servers halfway around the world at a fraction of the cost.
The Internet of Things connected everyday devices to the internet, creating a web of interconnectivity that thrills, but can also leave us vulnerable. Hybrid and electric vehicles became far more affordable to make and purchase, paving the way for automated and driving-assist cars programmed by AI algorithm, much like almost everything online.

Resistance is often a built-in part of any change, and it can be easy to feel overwhelmed, confused, or downright afraid. More than 100 million PCs continue to run Windows 7, creating an easy portal for criminals to access networks and create catastrophic damage. Many feel that as long as the system still works, why should they need to change and learn a new system all over again? While the pace of innovation may now feel relentless, adoption can make life easier, expand possibilities, and keep businesses running safely and securely. Just because something still runs doesn't mean it's safe to use.

Are you clinging to the technology of the past when more resilient options are now available to help bring your business into the future? You don't need to be on the cutting edge to embrace change and expand your business's possibility, flexibility, and security. Keeping yourself and your business chained to the past could lead to your own obsolescence in the future.

Is it frustrating to keep up with technology? The pace of innovation isn't slowing, so now is the time to reach out to a technology partner like Anderson Technologies to evaluate, architect and set up the right solution for you. Let's see if we are a right fit to take this burden off your plate.
Earlier this week, Google published a research blog about the people and regions most prone to cyber-attacks via email. The blog states that Google stops more than 100 million harmful emails (spam) from reaching Gmail users. Several malicious attackers sought to profit from the pandemic: more than 18 million malware- and phishing-related emails were tracked by Google's robust servers.

Google's researchers teamed up with researchers at Stanford University to study and analyze who the preferred targets of these phishing emails and malware were. It was not surprising to note that the major factors at play were place of residence, the type of device being used, and whether one's data had previously appeared in a breach. After aggregating the data over a five-month period, the following conclusions were drawn:

- 42% of the attacks were targeted at residents of the USA, 10% at residents of the UK, and 5% at residents of Japan.
- Most attackers tend to use a "one-size-fits-all" strategy and do not make the effort to localize their campaigns. The same email template is used in several countries.
- However, there are some countries where regional attackers thrive. For instance, 78% of attacks targeting users in Japan were in Japanese. Similarly, 66% of attacks targeting Brazilian users were in Portuguese.

The attackers did not launch random attacks. The attacks were pre-planned and aimed at topics that would garner maximum attention. The campaigns lasted around three days on average. Though the targets changed very often, the method of attack used by the attackers was largely the same.

Who was at a higher risk of the attacks?

As is generally the case, not all users were at equal risk of falling victim to a cyber-attack. It was found that users in the 55 to 64 age group were at the highest risk, whereas those aged 18 to 24 were low-risk users. Users in Australia faced twice the risk of being a victim of cybercrime compared to users in the United States. Mobile-only users were at a lower risk of being attacked compared to multi-device users.

Tips to stay safe

- Complete a Security Checkup for personalized and actionable security advice.
- If appropriate, consider enrolling in Google's Advanced Protection program, which provides Google's strongest security to users at increased risk of targeted online attacks.
- Enable Enhanced Safe Browsing Protection in Google Chrome to substantially increase your defenses against dangerous websites and downloads on the web.
- Browse these additional tips to manage your online security and choose the right level of protection for yourself.
With security breaches, digital crime and internet fraud continuously on the rise, the importance of safeguarding your information has never been greater. Many breaches are password related, and it’s not just major-brand companies or celebrities that are targeted. Hackers don’t discriminate. 81% of security breaches are due to weak or stolen passwords. (LastPass) What is two-factor authentication? In a 2019 survey by Cygenta of 1,000 people in the UK, 62% didn’t know what two-factor authentication was. Two-factor authentication (2FA) is an extra layer of security added to your log-in process; such as a code sent to your phone or a fingerprint scan, that then verifies your identity and helps to prevent cyber-criminals from accessing your private information so easily. 2FA offers an extra level of security that increases the difficulty for cyber thieves, because they need more than just your username and password credentials. 2FA is a subsection of multi-factor authentication (MFA), an electronic authentication method that requires you to prove your identity in multiple ways before you are given access to an account. Two-factor authentication is so named because it requires a combination of two factors, whereas multi-factor authentication can require more. Two-factor authentication requires that extra step — without 2FA, usually you simply enter your username and password to access an account, but two-factor authentication requires both something you know (your log-in details) and something you have (Eg. your phone). For example, if using a phone as your 2FA, once you enter your password, you’ll get a second code that is sent to your phone, and only after you’ve entered the code from your phone will you get access into your account. This code is known as an authenticator, a passcode or verification code. Without the code you can’t log on, even if you know the correct password. 2FA – You’re using it already Using a bank card at an ATM requires 2FA – something you know (your passcode) and something you have (your bank card). Why do you need 2FA? With the advanced techniques of hackers and slack originality of users with password creation, passwords alone are generally quite weak. Cyber criminals have turned to automated processes that can go through thousands of password combinations in minutes, so they don’t even have to monotonously go through a guessing process, they can even sleep easily whilst the procedure is done for them. So whilst the criminals are finding easier ways to hack, you need to use harder methods to prevent a successful attack. 2FA may seem like an added hassle, but without it you could be leaving yourself vulnerable. If you add something you have to allow access to your bank account, a cyber-criminal who knows your password won’t get any further without having your phone, for example, when it receives the verification code. By adding the extra security step means cyber criminals will struggle to access your account and move on to the next easier target. How 2FA works The factors of two-factor authentication are generally separated into three categories: - Knowledge: These factors require you to know something, like security a question, a PIN or a specific keystroke. - Possession: Something you physically possess, like a bank card that you need to insert into a device to gain entry. - Biology: Part of you to prove your identity, like a fingerprint or voice recognition. What are the different types of 2FA? 
There are indeed several types of 2FA available, all of them sitting within the categories listed above. Eg: - Hardware tokens: You need to have a physical type of token, eg a USB, to insert in your device before logging on. There are some hardware tokens that display a digital code (that changes – eg. RSA) and you must enter this code. - Software tokens: Apps that you download. A site that features this type of 2FA, sends a code to the app for you to enter to log in. - SMS: Here you receive a text message to your phone with a code to enter for access. - Push notifications: Another type of app authentication you download to your phone. When you enter your login details, a push notification is sent. A message appears on your phone asking you to confirm your login attempt. - Biometrics: This is verification by using something physical about yourself. The most common method is by using a fingerprint scanner. - Location: A method used by Facebook, this is where if an attempt to login to an account is made in an unknown / non-regular location it triggers an alert notifying an attempt was made on a new device / new location and you will normally receive a code to verify your identity if it is you. Does everyone offer 2FA? Not all sites use two-factor authentication, but some give you the option to activate it for your account. Some popular websites that offer 2FA include: Amazon, Facebook, Lastpass, LinkedIn, PayPal and Yahoo. But there are many more. Is 2FA 100% secure? Sadly no, no security measure is 100% guaranteed. It is a hacker’s ambition to beat the security measures in place to prevent them getting in, and they rise to the challenge until they win. There are also the concerns that users of 2FA can be complacent, thinking that by using 2FA means their password doesn’t need to be as complex. This is not the case, the more difficult to crack the password, the stronger the security. The other concern is that the most common 2FA method, using SMS authentication, is that SMS is less secure than using an authentication app. But it is still important remember that 2FA is still an added step of inconvenience for the hacker. Is 2FA a pain to use? Although many may regard 2FA as an added hassle, as technology improves, so 2FA becomes quicker and easier to implement. Verification codes generally take seconds to generate and deliver. Protect Yourself – 2FA is important 90% of passwords can be cracked in less than six hours. Despite no 100% guarantee, 2FA still makes it harder for identity theft and phishing via email to happen to you; cyber criminals need to gain more information than just your username and password. Use 2FA and let the hackers pass you over for the more convenient, lower-hanging fruit with the ‘123456’ / ‘password’ passwords! Offering several types of security test, we can help you check how secure your web, network, IT infrastructure is and even run a campaign to check on employee / colleague cyber security awareness with a bespoke phishing test. Find out more
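As an aside on the software-token category described above, authenticator apps are typically built on time-based one-time passwords (TOTP). The sketch below is purely illustrative and uses the third-party pyotp library; in a real deployment the shared secret is generated once, provisioned to the user's app (usually via a QR code), and stored server-side rather than kept in a single script.

```python
# Rough illustration of how a software-token (authenticator-app) second factor
# works under the hood, using the third-party pyotp library (pip install pyotp).
import pyotp

secret = pyotp.random_base32()       # shared secret provisioned to the user's app
totp = pyotp.TOTP(secret)            # time-based one-time password generator

code = totp.now()                    # what the user's authenticator app would display
print("Current code:", code)

# On login, the server verifies the submitted code against the same secret.
print("Valid?", totp.verify(code))   # True within the current time window
```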
NETRA has been designed and developed at the Centre for Artificial Intelligence and Robotics (CAIR) under the Defence Research and Development Organisation (DRDO). Like America, Britain, China, and Iran, the Indian government is also trying to establish national internet scanning, through which it can track terrorist activity and other national activity as well. NETRA will use only 300 GB of storage per installation to store the scanned data. NETRA is hardware that will be installed at the ISP level in over 1,000 locations. Each installation will be called a "node," and each will have a storage capacity of 300 GB. This means 1,000 nodes × 300 GB = 300,000 GB or more.
A new SMS phishing (sometimes called "smishing") campaign has been targeting UK residents. The HM Revenue and Customs (HMRC) tax rebate scams have been tricking many people into giving away personal information such as names, addresses, and passport numbers to attackers. The attack begins with an SMS message telling the user either that they are eligible for a tax rebate or that they owe tax money. Once the user clicks the link included in the message, they are taken to a webpage that targets online banking customers based on a sort code. By using the sort code, the attackers can identify which bank the victim uses. From there, a phishing page for that bank is displayed, where the user is asked to enter their username, password, memorable words, two-factor authentication codes, and more. This campaign uses an extensive workflow, making the user go through many pages and supply a lot of information. The phishing web pages are designed to mirror the legitimate ones. Many different sites have been used, with new ones added daily as old ones start to become marked as spam.

By Anthony Zampino
From stacks of student applications to endless inventory sheets, paper-based processes frequently prevent school staff from focusing on their core responsibilities, which ultimately affects students. Automation technology has the potential to absorb the time-consuming tasks that bog down administrative staff in educational institutions.

With limited budgets, decision-makers regularly face tough choices, and digital transformation often falls to the bottom of the priority list. As a result, educational institutions are generally slower to adopt transformation initiatives. However, when they do, the results can be meaningful and long-lasting. In Singapore, which is pursuing a "smart nation" strategy, the government recently announced plans to introduce AI technologies into five sectors. Education is one of the targeted sectors. In the meantime, start-ups in Singapore have begun to leverage emerging technologies to enhance processes such as applying for college, finding a tutor, or getting an answer to a math question. Overall, for schools to modernize in line with the smart nation vision, everyday processes also need to be automated.

The effects of automated processes in education

From enrollment to parent communications, many schools rely too heavily on paper-based methods. When you consider that 616,505 students in Singapore enrolled in school in 2018, it is clear that considerable time is wasted on inefficient processes. Whether a school system administers 100 students or 100,000 students, there are better ways to manage paper-based processes such as annual enrollment, applications for niche programs, and parent permission forms for extracurriculars and field trips. These workflows often yield hundreds or thousands of paper forms that come across administrators' desks each year. Positions like enrollment coordinators, academic directors, and vice-principals (roles that traditionally wear many hats on a school's administrative team) often manually handle the entire process for any given form. And it is easy to fall behind.

Digitizing such processes allows administrators to focus on better serving students. For example, in a school system that may receive over 1,000 miscellaneous email requests from parents each month, staff can handle them through a well-designed, form-based automated process instead. The saved time can then go toward mission-critical work rather than basic administration. So how can education leaders start to digitally transform how schools operate?

Open your mind to improvement

By pointing out inefficient or paper-based processes to relevant decision-makers, you encourage them to see how the team could function better. From there, another way to get everyone on board is to help them understand what other schools are doing to eliminate their pain points. Once you have obtained buy-in, share the benefits of using a low-code solution that enables everyone on the team to participate in digital transformation.

Empower staff with tech

You cannot solve other teams' pain points until you know what they are. By providing employees tools to visually map their processes, see how well they work, and uncover any bottlenecks or breakdowns, you empower them to find lasting solutions. Giving employees a role in the transformation process will also encourage broader adoption of the tools you deploy.

Standardize new processes

Standardize your automated processes by providing school and district-wide training.
To ensure new employees learn and carry the processes forward, build the teachings into their onboarding. By creating a center of excellence (a digital process repository), you can prevent new processes from being set aside.

Digital transformation applies to the education world too, and the processes that make up the student experience are ripe for automation. While school administrators may not be in the classroom with students each day, they can utilize automation and digital processes to create systems that better serve students. Click here to read how other schools and educational businesses have built digital transformation into their future.

Want to learn how your educational organization can benefit from the Nintex Platform? Request a demo from the experts.
Last weekend, France and Italy signed the Quirinal Treaty, a bilateral agreement designed to bolster political cooperation and commercial ties. The new treaty will open new business opportunities for businesses and could change the EU's internal dynamics. Such bilateral treaties in the EU are rare but consequential. The 1963 Elysée Treaty between France and Germany, which ended decades of deadly enmity, has powered the Franco-German engine, which has been instrumental in building the European Union. What we know First and foremost, the treaty calls for greater policy coordination and alignment between the two governments across security and defense, European affairs, economic development, sustainability, technology and space, migration, and education and training. The treaty also lays the foundation for further deepening the already strong industrial and commercial ties between the two countries. There are 4,000 Italian businesses in France and French businesses in Italy employ over 300,000 people. The treaty proposes to increase cross-border investment flows, develop new education and training opportunities for Italian and French youth, and calls for new joint industrial projects in defense, new technologies (AI, quantum computing, 5G/6G), digitization, sustainability (transport, infrastructure, and agriculture), and health. France proposed the treaty in 2017, but the election of a populist-led government in Rome in 2018 derailed the project for two years as bilateral tensions mount. By 2020, tensions abated. Rome and Paris conducted successful consultations on what would become the EU recovery package. The treaty itself was revived only when Mario Draghi, an independent pro-EU technocrat, became Prime Minister in February 2021. The treaty is largely designed to turn the page on a decade of misaligned policies and counterproductive rivalries. In Libya, French and Italian rivalries reduced their ability to weigh on the outcome and opened the door to Russian and Turkish interventions. In the past few years, disagreements over migration and controversial cross-border M&A projects, have greatly exacerbated tensions and mutual misunderstanding. What it means for businesses - Greater bilateral political cooperation will be most visible at the EU level. Both countries will coordinate their policy positions to shift the balance of power on key topics. In particular, businesses and investors should expect France and Italy to push for a loosening of the fiscal rules (which limit the deficit to 3% of GDP and debt to 60%) and strive to maintain accommodating fiscal and monetary policies to support the economic recovery. This appears more doable now than at any point during the past decade when Germany systematically opposed such an initiative. The close coordination will also ensure that Southern countries in Europe have more of a voice in decision-making, with Paris playing a pivotal role thanks to its close relationship with Germany. - More economic policy coordination should consolidate the recovery and fuel higher economic growth both countries need. Both economies stalled in the aftermath of the Great Financial Recession of 2008, fueling a host of social and political problems. Both want to capitalize on the EU recovery package momentum to durably improve economic growth and restore competitiveness. 
Sectors that should benefit include defense, space, future technologies (e.g., AI, 5G/6G, quantum computing), digitization (e.g., data, cybersecurity, connectivity, digital payments), and environmental sustainability in transportation, infrastructure, and agriculture.
- Cross-border mergers and acquisitions activity should pick up. Both countries want to bolster the EU's strategic autonomy, reinforce value chains in strategic sectors, and enhance the competitiveness of European firms on global markets. M&As allowing for economies of scale and rationalization of production could help achieve these goals. However, recent mergers have been politically sensitive and controversial. In some cases, mergers have failed altogether, and businesses will still face headwinds if their acquisitions target 'strategic' sectors.
- Lastly, the treaty also calls on both governments to step up the fight against corruption, money laundering, and tax evasion. To successfully leverage these new opportunities, businesses and investors will need to ensure they don't run afoul of the rules.

The treaty is ambitious and has received broad political support in both countries, including from far-right parties like Lega in Italy and Marine Le Pen's Rassemblement National in France. However, both countries will hold elections soon, and new governments could undo or stall the implementation of the treaty, curbing opportunities.
In part one of our series on reaching computational balance, we described how computational complexity is increasing exponentially. Unfortunately, data and storage follows an identical trend. The challenge of balancing compute and data at scale remains constant. Because providers and consumers don’t have access to “the crystal ball of demand prediction”, the appropriate computational response to vast, unpredictable amounts of highly variable complex data becomes unintentionally unplanned. We must address computational balance in a world barraged by vast and unplanned data. Before starting any discussion of data balance, it is important to first remind ourselves of scale. Small “things” when placed inside a high speed computational environment rapidly become significantly “bigger things” that immediately cease to scale. Consider the following trivial example to computationally generate all possible 10 character passwords containing only lowercase letters and numbers. One can quickly understand and imagine the size of the resulting output. It isn’t a challenging problem. The first password hashes in UNIX used a DES-based scheme and had the limitation that they only took into account the first 8 ASCII characters. Ten characters should be trivial. Applying the UNIX command “crunch” to generate all possible 10 character long passwords: You would need thirty five petabytes of data storage to hold the 3,656,158,440,062,976 possible 10 character long passwords. A total of 36,577 average 1TB laptop hard drives would be needed to store the file. At 200MB/s it would take 53,270 hours or about 6 years to write out the file, not including needing to swap out 36,577 disk drives. Was that your first guess? Small things quickly become bigger things. This trivial example shows that very small things have the potential to become unexpected, and significantly larger things. Don’t try this at home. However, data sets in analytics, research and science can often quickly look a whole lot like this apparently trivial example. Generating initial condition sets to understand how galaxies would collide in the early universe, for instance turns out is a very similar storage bound challenge. Large sparse matrices are generated computationally in memory in parallel in a few seconds resulting in huge outputs, for example the popular super computer powered Illustris project. Such workloads need carefully designed I/O paths to manage all the resulting write operations. Weather and climate modeling algorithms have a similar challenge generating appropriate “initial conditions” which are then computed against in parallel. Our password example is a simple streaming application. Few if any real world workloads would ever consider a single stream as their sole data path. Seeking into datasets and taking out pieces requires reading, computing and writing at the same time. It is essentially “the walking and chewing gum” challenge of modern computation. As computing tasks started to be distributed amongst multiple discrete devices to increase throughput and performance, storage systems needed to adapt quickly. Many to one architectures for shared storage simply do not scale. Even using our trivial password streaming example, you can imagine 1000’s of machines all trying to write simultaneously to a single storage instance. It would quickly become overwhelmed. To solve for the issue of storage being inundated by parallel requests, we designed our filesystems to also be parallel. 
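Before moving on, here is a rough sanity check of the password example above. The figures depend on the assumptions spelled out in the comments (36 possible characters, roughly 11 bytes per line on disk, binary vs. decimal units), so they differ slightly from the article's numbers, but the order of magnitude is the point.

```python
# Back-of-the-envelope check of the 10-character password example above.
# Assumes lowercase letters plus digits (36 symbols) and ~11 bytes per entry
# (10 characters plus a newline) in the output file.
CHARSET = 26 + 10                 # a-z plus 0-9
PASSWORDS = CHARSET ** 10         # 3,656,158,440,062,976 combinations
BYTES_TOTAL = PASSWORDS * 11      # one password per line

print(f"passwords:        {PASSWORDS:,}")
print(f"output size:      {BYTES_TOTAL / 2**50:.1f} PiB")
print(f"1 TiB drives:     {BYTES_TOTAL / 2**40:,.0f}")
print(f"hours at 200MB/s: {BYTES_TOTAL / (200 * 10**6) / 3600:,.0f}")
```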
Storage evolved from single network attached shared volumes, to clustered file systems, to parallel file systems, and on to massively distributed object based systems. Each evolution resulted in more complexity, each needed to track billions of data objects and their state while simultaneously trying to manage to achieve performance and balance. Storage evolution didn’t happen overnight. Early systems dating back to 1960 such as the cutely named Incompatible Timesharing System (ITS) had a functional concept of shared storage. ITS machines were all connected to the ARPAnet, and a user on one machine could perform the same operations with files on other ITS machines as if they were local files. In 1985 we saw Sun build the now ubiquitous NFS “Network File System”. The Next Platform recently discussed the evolution of NFS now on its fourth major iteration. The NFS we know and love is here to stay. The critical issue with storage is that it fails. Early systems architects quickly realized that without functioning and available storage, any computing was effectively useless. Clustered storage systems were assembled so that “failover” replaced the ever more common and undesired “fall over”. It is here where things become very tricky, very quickly. Distributed lock management on shared clustered systems with reads and writes in flight taking place during a failure event needed to be immediately reconciled for “consistency” and “concurrency”. Systems with quorum voting schemes were designed to solve for the “queenio cokio” issue, knowing which node effectively “had the ball” and was allowed to issue the update. It was a required, but highly inefficient choke point. More time passed. Enter the distributed file system and the concept now known as “eventual consistency”. Turns out if you don’t care about consistency you can eliminate a huge component of file system complexity. By distributing the workload across multiple devices and not needing any messy “quorum” rapidly improves performance as a key bottleneck is simply removed. Yes you lose only state, and can no longer prove that a write is consistent at a given moment in time, but that doesn’t always matter. Hyperscale quickly realized these benefits, they also threw away POSIX file semantics by using HTTP protocols to “POST” and “GET” data objects. Alas, decades of POSIX aware software is still challenged by these changes as they continue to expect “fread()” and their friends to still exist. Engineering teams were not daunted by this, they built “gateways”. Effectively, high performing data caches sitting between the end user and large web based object stores. Unfortunately if not executed well, this gateway approach brings us right back to square one by once again having a “gate” through which we march our complex data. What was old, is now new again. Hope was not lost. More time passed, and now one of the most ubiquitous file systems in HPC – Lustre enters the scene. It looks like a distributed file system, but provides us with the warm and familiar comforts of POSIX we have grown to love. The clever separation and routing of “metadata” and the distribution of physical data objects allows for the potential to build extremely highly performing filesystems that can handle the barrage of data from tens of thousands of nodes. The development effort over 14 years that has been put into Lustre software by a number of experts and practitioners is now reaping significant benefits for the community. 
Couple this clever software with ever more highly performing flash storage and super low latency interconnects that improve each I/O you can do per second and you are basically off to the races. Deploying multiple meta data targets (MDT) on top of metadata servers (MDS) to then be able to “index” and reference the highly distributed and striped object storage targets (OST) has resulted in a file system that not only scales but can also provide huge capacities up to a theoretical 16 exabytes, with 100 petabyte systems being found in production today. Intersect360 stated that 24% of 317 HPC systems in 2017 used Lustre or a derivative, with IBM’s Spectrum Scale (GPFS rebranded from 2015) deployments also taking up another identical 24%. Highly performing distributed POSIX file systems are clearly deployed and enjoyed by our community. On Costs and Religion So we are all set right? Not quite. Turns out that storage systems have significant “costs”. Not only in terms of the cost of “I/O”, but more of one directly associated with $/TB/operation. Engineering storage systems to provide “unlimited capacity” while maintaining high levels of performance is the largest set of headaches a HPC support team will have today. Making rapid analogies to Formula 1 or NASCAR is fairly obvious here. Designing a for purpose vehicle to travel at high speed and without component or driver failure is their goal. These obvious similarities to high performance storage should not be underestimated. Storage systems are fraught with potential risk, danger and expense. Fortunately due to a wealth of available storage solutions, you can effectively “buy yourself out of trouble” and throw money at your issue. A large number of proprietary, highly available and highly performing storage systems now exist. We are literally spoiled for choices, but as with anything, nothing in life comes for free. But, wait. Open source is free right? Commodity off the shelf hardware coupled with open source storage subsystems that can be continually tweaked and improved by you and the community provides what appears to be excellent value for money. Free software. Cheap hardware. It just seems somehow perfect. Until you factor in the expert humans and technical know how needed to support complex, fragile boutique storage systems. In large shared user systems, the storage needs to be continuously available and you simply can’t spend time to fiddle with it while people are trying to run important workloads. Again it is an issue of balance. There will always be two camps in the storage debate, forever separated by an almost “religious” chasm. “On prem or cloud” is another heavily debated topic with mostly similar outcomes. Our intent here is not to convince you one path is better than the other, only to accurately describe the landscape and show how with everything in technology selection the correct answer is: “It depends”. So what goes wrong with storage and why should we worry about failure? Any product, be it from a multi billion dollar global company or a nifty new block storage system fresh out of a research lab is “sold”. Humans can’t help but describe technology and “sell it” in glowing terms. There is distinct ownership of ideas and concepts and we love to describe them in positive detail. Rarely if ever does the human interaction and conversation ever turn to potential issues with an idea, product or service. 
Imagine visiting your local car dealership and being told what potential problems there are with the new vehicle you are thinking about purchasing. It will not happen. To restore balance, especially in storage, we need to have an open and candid conversation about how systems perform in the real world so they can be best matched to workload. When Adrian Cockcroft and team architected the systems needed to manage data deluge in the technology group at Netflix, they rapidly realized that “failure was indeed an option”. Unplanned data balance To understand failure, the Netflix group built “Chaos Monkey” (now on version 2). This is quite literally intentionally, destructive software that randomly wanders through their infrastructure terminating instances. Essentially creating computational and data havoc. Chaos Kong took this to a whole new level by “terminating” entire data centers. That’s bold. All of this effort so the team could understand the obvious impact of failure and what would happen to their data intensive global service. This is a wonderful case study in not only how to manage unplanned data balance, but also more importantly one of extreme technical transparency. We need more of this. So imagine you’re building a high performance file system… What would be the practical questions to ask yourself, your team or your storage vendor? We provide ourselves with 1,000’s of “benchmarks” to measure systems against hypothetical workloads. SpecFS and others all go a long way to describe “speeds and feeds” associated with storage. IOPS, TB/s, resilience, endurance, security, management, $/TB, the list goes on. But a more meaningful set of questions need to be asked. This is why here at The Next Platform we dive deeply, with unbiased long form detail into claims, and commentary to provide a meaningful and thoughtful analysis of systems and products. We do this work so you aren’t caught out, or in anyway surprised like in our earlier 35 petabyte “password” example. The third part of “Practical Computational Balance” will focus on software challenges. Algorithms and code sit on top of all the storage systems and computing infrastructure we carefully select, design and build. Balance there can only be achieved through careful and considerate integration of both software, hardware and humans. About the Author Distinguished Technical Author, The Next Platform James Cuff brings insight from the world of advanced computing following a twenty-year career in what he calls “practical supercomputing”. James initially supported the amazing teams who annotated multiple genomes at the Wellcome Trust Sanger Institute and the Broad Institute of Harvard and MIT. Over the last decade, James built a research computing organization from scratch at Harvard. During his tenure, he designed and built a green datacenter, petascale parallel storage, low-latency networks and sophisticated, integrated computing platforms. However, more importantly he built and worked with phenomenal teams of people who supported our world’s most complex and advanced scientific research. James was most recently the Assistant Dean and Distinguished Engineer for Research Computing at Harvard, and holds a degree in Chemistry from Manchester University and a doctorate in Molecular Biophysics with a focus on neural networks and protein structure prediction from Oxford University.
If your organization is already in the cloud, you can better prepare for your cloud needs with machine learning. Machine learning is changing the way developers build applications to customize user experience. It's extremely beneficial to know how machine learning works with huge amounts of data and to possess knowledge of certain machine learning systems. Today, machine learning is everywhere. And when machine learning is used in conjunction with the cloud, complex machine learning algorithms can be greatly simplified.

So, what exactly is machine learning? As people, we gain knowledge based on past experiences. Similarly, machine learning develops useful artificial intelligence by taking data and processing it in software that allows computers to essentially think for themselves without explicit programming.

It's easy to see why machine learning is taking off in the cloud computing industry. Most businesses are data driven. Many successes and failures depend on the data that businesses analyze and make decisions on. Machine learning can help organize your data and make it easier to build applications. Many enterprises have data-driven goals that fit into three categories.

- Reflective Analysis & Reporting – Before making decisions to change any part of your organization's infrastructure, it's important to have key insights and data that represent your IT environment's current condition. Applications can create so much data. This data could give you insights on how users engage with your app or web services and on user experience. Machine learning goes through this data for you and helps you better understand the way your app performs, improving user experience.
- Real Time Processing – If you're trying to regularly keep up with changing trends or how your business is doing, machine learning helps process huge amounts of streaming data into a dashboard interface. It can identify patterns and provide recommendations.
- Predictions – With the live streaming data, machine learning can also predict your users' experience and let you know if you should make changes. These predictions can improve overall user experience and, in turn, improve the performance of your app or web service.

Along with these data-driven goals, machine learning also helps with fraud detection, web search results, email spam filtering, network intrusion detection, pattern and image recognition, and much more. Many cloud solutions contain machine learning algorithms, including IBM i, AWS, Azure and more.

Connectria delivers cloud solutions and services that are secure, reliable, and fast. If you'd like to learn more, please contact us.
For several decades, enterprise IT applications were focused solely on handling structured data, which have a predefined model and are organized in a well-defined form that makes them easy to store, query, process, and manage. Indeed, the vast majority of transactional enterprise applications such as CRM (Customer Relationship Management) and Enterprise Resource Planning (ERP) systems are very effective in handling large amounts of structured data through the use of popular Relational Database Management Systems (RDBMS) and the Structured Query Language (SQL).

In recent years, however, we are witnessing the emergence of new applications that deal with large volumes of unstructured data as well, such as flat binary files containing text, video, and audio. This is, for example, the case with social media data and data used in multi-sensor and Internet-of-Things (IoT) applications. Unstructured data are not necessarily devoid of structure, as they may include an encoding structure and metadata associated with them. Nevertheless, they need a totally different approach to their storage and handling, which led to the emergence of entirely new types of databases called NoSQL databases.

NoSQL databases are used to handle the vast amounts of unstructured data that are nowadays part of the trending BigData applications. NoSQL databases have no strict schema requirements and do not necessarily guarantee the Atomicity, Consistency, Isolation, and Durability (ACID) properties of RDBMS systems. Rather, NoSQL systems tend to trade consistency in favor of high availability while adhering to the "BASE properties", which are outlined below:

- Basically Available: the system guarantees availability, even in the presence of partial failures.
- Soft state: the state of the system may change over time, even without new input, as replicas converge.
- Eventually consistent: the system becomes consistent over time, once it stops receiving new updates.

There are different types of NoSQL databases that meet these properties, which have been developed to serve various purposes and applications. The most prominent types include:

- Key-value stores
- Document stores
- Column (wide-column) stores
- Graph databases

Given the different types of NoSQL databases, system architects and developers are offered various database options for handling unstructured data in their applications. Selecting the right option is primarily a matter of understanding their properties and use. In particular, it's a good practice to use a document store when your data comprises collections of similar entities which are semi-structured and sparse rather than conventional tabular data. As a prominent example, a document store is a good choice for storing the data of a blog. This is because blog posts can be stored as indexed documents, which can be ordered and/or retrieved by properties like their author, subject, and authoring date. Posts are typically unstructured or semi-structured, yet they carry the above-listed metadata.

It's probably a good idea to use a key-value store or a column store when scalability is your main concern. In this context, scalability is reflected in the size of the data and the overall load to be put on the system. In most cases, writing unstructured data very fast is what is required. This is the reason why a column store can be a good idea for a Twitter-based application, e.g., an application that leverages vast amounts of Twitter data for branding or sentiment analysis. Twitter applications need to deal with many posts that come at high speed, i.e., they feature very high rates of ingestion, as hundreds of gigabytes of data are posted on Twitter every few minutes. Moreover, processing of data from Twitter applications requires queries based on the user or the date of the tweet, which can be supported by a key-value store or a column store.
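To make the document-store and key-value patterns above concrete, here is a small illustrative sketch. It is not a product recommendation; it simply assumes a local MongoDB instance (via the pymongo driver) for the document-store case and a local Redis instance (via redis-py) for the key-value case, with made-up database, collection, and key names.

```python
# Illustrative sketch only: a blog post as a semi-structured document, and a
# fast key-value write. Connection details and names are assumptions.
from datetime import datetime, timezone

from pymongo import MongoClient
import redis

# Document store: each blog post is a self-describing document that can be
# sparse (fields may be missing) and queried by author, subject, or date.
mongo = MongoClient("mongodb://localhost:27017")
posts = mongo["blog"]["posts"]
posts.insert_one({
    "author": "jane",
    "subject": "Choosing a NoSQL database",
    "published": datetime.now(timezone.utc),
    "tags": ["nosql", "architecture"],   # optional, schema-free field
    "body": "Full text of the post...",
})
for post in posts.find({"author": "jane"}).sort("published", -1):
    print(post["subject"])

# Key-value store: ingest fast-arriving items (e.g., tweets) keyed by user and time.
kv = redis.Redis(host="localhost", port=6379)
kv.set("tweet:jane:2022-10-01T12:00:00Z", '{"text": "hello", "lang": "en"}')
print(kv.get("tweet:jane:2022-10-01T12:00:00Z"))
```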
Graph databases are the right choice when data traversal is a primary concern. This is usually the case in applications where data are represented in graph format in order to represent in an intuitive way the relationships between the different entities. The most characteristic examples of applications that can really benefit from graph databases are social networking applications, where analyzing social graphs is one of the most important tasks. In such applications, social network participants (e.g., persons) are represented as nodes and their relationships (e.g., friend or follower relationships) as arcs or edges. In this context, graph databases facilitate queries based on the names of the nodes while at the same time easing the creation of node groups. Moreover, they are an excellent choice when multiple nodes of the graph need to be traversed (e.g., in order to understand how two or more people can get to know each other through their social graph).

While there are many uses of NoSQL databases, it's always important not to be deceived by the hype around NoSQL and BigData. Conventional RDBMS systems remain the primary choice for most applications, especially when you need to process structured data and produce reports. NoSQL is not the ideal tool for reporting, which is a very important function in most enterprise applications. Hence, the use of NoSQL should be avoided in cases of uniform and structured data and also in cases of legacy systems that already make use of an RDBMS. By and large, NoSQL is certainly a powerful tool in your arsenal for surviving in a data-driven society, especially in cases of BigData applications. We hope that our recommendations will help you choose the right database for your storage needs.
In this series, we will be using the flowchart below to follow the process of determining which adware we are dealing with. Our objective is to give you an idea of how many different types of adware are around for Windows systems. Though most adware will be classified as PUPs, you will also see the occasional Trojan or rootkit, especially in the types of adware that are harder to detect and remove.

Advertisements

It all starts with advertising. To give you an idea how much money goes around in this industry, the US online ad spending for 2016 was estimated at $62 billion. Anyone that is able to grab a chunk of that will be very happy to do so, even if the methods are considered iffy. Some will not shy away from criminal behavior when that kind of money is involved. Two of the fraudulent methods to grab some of that money are called ad fraud and adware. If you want to learn the difference between these two please read my blog post, Adware vs Ad fraud. In this post, we will concentrate on adware, which basically boils down to some program on your computer showing you advertisements that do not come from the websites you are visiting.

Identify the source

We will use Process Explorer to identify the process that is behind an advertisement. Usually, this will be a browser and you will recognize it as such. But sometimes, these advertisements pop up as windows without title bars. In cases like these, you can use the cross-hairs in the Process Explorer menu, as shown below:

Drag and drop the cross-hairs on the window you are curious about and in the Process Explorer list of running processes the process responsible for the window will be selected (showing in blue). You now have the name of the process and, in case there are more instances of that process, the Process Identification (PID) associated with it.

Check where the process is connecting to

This is optional since it almost never provides any information that is useful in the removal process. Extra research, however, could tell us what family the adware belongs to and what characteristics you may expect as a result. So, if you like, you can use the Windows built-in (after XP) tool Resource Monitor (resmon). To start Resource Monitor, you can use Windows Key + "R", type "resmon" in the "Run" box and click OK. Under the Network tab > Network activity, you will find the most specific information for any connected process. If one process has several open connections you can click the "Image" column header to sort the processes alphabetically, which provides a better overview of what a given process might be doing. Also, check if the PID listed in Process Explorer matches the one in Resource Monitor. This should be done to make sure that you are looking at the process that is showing the advertisement.

Browsers first

As this will be the most common case, let's deal with it first. The window showing the advertisement is a window or new tab of your default browser. Some adware authors find it easier or more effective to open the Microsoft browser that came with the OS, so they will open Edge for Windows 10 and Internet Explorer (IE) for earlier versions.

Clear your browser's cache

In Edge, the procedure is:
- Click the Hub icon, click "Clear History"
- Select the appropriate options. Note that clearing the "Cookies and saved website data" will result in you having to login at every site again.
- Click the "Clear" button.
For Internet Explorer:
- Click the gearbox icon
- Select Internet Options
- On the General tab click on the Delete button under Browsing history
- Select the appropriate categories. Note that clearing the "Cookies and website data" will result in you having to login at every site again.
- Click the Delete button if you are happy with your choices.

For Firefox:
- Click the menu button and choose Options.
- Select the Advanced panel.
- Click on the Network tab.
- In the Cached Web Content section, click Clear Now.

For Chrome:
- On your browser toolbar, click More (3 dots)
- Point to More tools, and then click Clear browsing data.
- Select the items that you want to clear.
- Click the Clear browsing data button.

For Opera:
- In the Opera Menu choose Settings
- Select Privacy and Security
- Under Privacy click the Clear browsing data... button
- Delete the items you wish to delete
- And click on the Clear browsing data button

Removing extensions and toolbars

Extensions and toolbars are so closely related that removing the extension will usually take the toolbar out as well.

Internet Explorer: Tools (gear icon) > Manage add-ons > Toolbars and Extensions > Select the one(s) you don't trust one by one and click "Disable"

Firefox: Menu (horizontal stripes) > Add-ons > click on "Disable" behind the ones you don't trust or don't recall installing.

Chrome: Menu (horizontal stripes) > Settings > Extensions > Uncheck "Enabled" behind the ones you don't trust or don't recall installing.

Opera: click the Opera icon > Extensions > Extension Manager > click on Disable below the ones you don't trust or don't recall installing.

- Identify the process
- Clear browser caches
- Remove browser extensions and toolbars
- Winsock hijackers
- DNS hijackers
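For readers who prefer a scriptable alternative to the Resource Monitor check described earlier in this post, the short sketch below lists processes that hold outbound network connections. It is only an illustration: it assumes the third-party psutil package is installed, and on Windows it may need to run from an elevated prompt to see every connection.

```python
# Rough, optional helper: list processes with open network connections,
# similar in spirit to the Resource Monitor check described above.
# Assumes psutil is installed (pip install psutil).
import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.pid:  # only connections with a remote endpoint and a known owner
        try:
            name = psutil.Process(conn.pid).name()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        print(f"PID {conn.pid:>6}  {name:<25} -> {conn.raddr.ip}:{conn.raddr.port}")
```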
Protecting students online is often a moving target. Online threats are emerging as rapidly as ever, and minors are often seeking out inappropriate content. This makes it challenging for an educational institution to manage online threats. DNSFilter has produced this guide in order to help you understand and meet CIPA compliance for your school.

What content do I need to block?

- Adult Content
- Alcohol & Tobacco
- Drugs
- Gambling
- Hacking & Cracking
- P2P & Illegal
- Search Engines & Portals
- Weapons
- Botnet
- Cryptomining
- Malware
- Phishing & Deception

What are the requirements of CIPA?

The Children's Internet Protection Act (CIPA) regulation centers around adopting an "Internet Safety Policy" (more well-known as an Acceptable Use Policy) which sets forth in writing how you will use protection measures, monitor technology use, and educate minors on safe use of technology. The FCC also requires that, "Before adopting this Internet safety policy, schools and libraries must provide reasonable notice and hold at least one public hearing or meeting to address the proposal." A provision in the Protecting Children in the 21st Century Act also requires schools to incorporate the subject of appropriate online behavior (including social networking sites and cyberbullying awareness and response) into their Acceptable Use Policy and annual school training.

According to the FCC, there are five elements the Acceptable Use Policy must address:

- Access by minors to inappropriate matter on the Internet;
- The safety and security of minors when using electronic mail, chat rooms and other forms of direct electronic communications;
- Unauthorized access, including so-called "hacking," and other unlawful activities by minors online;
- Unauthorized disclosure, use, and dissemination of personal information regarding minors; and
- Measures restricting minors' access to materials harmful to them.

DNSFilter is able to help in meeting all five of these requirements. Although your school/library is responsible for writing a policy and monitoring technology use in the classroom, you can leave the enforcement of it in our hands. Our product is one of the most lightweight yet effective ways to ensure your entire network is protected from inappropriate content and online threats.

How should I set up DNSFilter to be CIPA compliant?

Using the FCC categories listed above, you can set up DNSFilter to be your primary solution to defend your network against malicious internet traffic.

- Prevent access by minors to inappropriate content. According to the FCC, "The protection measures must block or filter Internet access to pictures that are: (a) obscene; (b) child pornography; or (c) harmful to minors (for computers that are accessed by minors)." DNSFilter has a feature to turn on a CIPA compliant policy with a single click. We also automatically block child pornography for all customers, through our partnerships with the Internet Watch Foundation and Project Arachnid.
- Enforce the safety and security of minors when using email, chat, and other direct electronic communication. The most effective way to do this is by blocking the following categories: Blogs & Personal Sites, Chat & Instant Messaging, Media Sharing, Message Boards & Forums, Social Networking, Streaming Media. You may consider whitelisting sites that you view as benign or are able to appropriately supervise in the classroom or library environment.
- Prevent unauthorized access, hacking, and unlawful activities.
To prevent unauthorized access, hacking, and other unlawful activities by minors on the network, apply filtering policies to the categories of Hacking & Cracking and P2P & Illegal, and turn on all of our threat categories.
- Avoid unauthorized disclosure of personal information regarding minors. No action is required on your part to apply this for our product. DNSFilter does not collect personally identifiable information for adults or minors.
- Restrict minors' access to materials harmful to them. The best way to restrict access is to ensure that minors cannot circumvent DNS settings. You can do this by implementing DNS firewall rules. This is the only way to prevent minors from tampering with network adapter settings on their local device and escaping your filtering policy. You should configure all outbound DNS traffic on port 53 to point to our servers. We also have an article that goes into greater depth on Preventing Circumvention.

The other side of the coin is that, under CIPA, adults should have the ability to bypass filtering policies for lawful purposes or bona fide research. There are two options for this. The first is to set up a bypass password. You can do this in the dashboard by navigating to Policies -> Block Pages, where you will be able to set a password that can then be given to staff. The second option is to implement NAT IPs, which allow you to have multiple policies based on network segments. This would allow staff computers that are on a separate LAN/vLAN to have different policies than students.

Other resources:
- Consortium for School Networking's Guide for School Districts, which includes links to sample Acceptable Use Policies.
- The FCC's Children's Internet Protection Act (CIPA)
- American Library Association's CIPA Analysis
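As a quick illustration of the "restrict access" step above, it can help to spot-check that clients really are resolving DNS through the filtering service rather than bypassing it. The sketch below is illustrative only: the test domain and block-page IP address are placeholders, not real DNSFilter values, so substitute ones appropriate to your deployment.

```python
# Illustrative only: verify from a client machine that DNS resolution goes
# through the filtering resolver. Domain and block-page IP are placeholders.
import socket

TEST_DOMAIN = "example-blocked-site.test"   # hypothetical domain your policy should block
BLOCK_PAGE_IP = "198.51.100.10"             # hypothetical IP of the filter's block page

try:
    resolved = socket.gethostbyname(TEST_DOMAIN)
except socket.gaierror:
    print(f"{TEST_DOMAIN}: did not resolve (blocked at the resolver or nonexistent)")
else:
    if resolved == BLOCK_PAGE_IP:
        print(f"{TEST_DOMAIN}: resolves to the block page ({resolved}); filtering is in effect")
    else:
        print(f"{TEST_DOMAIN}: resolves to {resolved}; this client may be bypassing the filter")
```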
Apache Kafka is an event streaming platform. Kafka is a distributed system consisting of servers and clients that communicate via a high-performance TCP network protocol. It can be deployed on bare-metal hardware, virtual machines, and containers in on-premise as well as cloud environments.

Servers: Kafka is run as a cluster of one or more servers that can span multiple datacenters or cloud regions. Some of these servers form the storage layer, called the brokers. Other servers run Kafka Connect to continuously import and export data as event streams to integrate Kafka with your existing systems, such as relational databases, as well as other Kafka clusters. To let you implement mission-critical use cases, a Kafka cluster is highly scalable and fault-tolerant: if any of its servers fails, the other servers will take over their work to ensure continuous operations without any data loss.

Clients: They allow you to write distributed applications and microservices that read, write, and process streams of events in parallel, at scale, and in a fault-tolerant manner even in the case of network problems or machine failures. Kafka ships with some clients included, which are augmented by dozens of clients provided by the Kafka community. Clients are available for Java and Scala including the higher-level Kafka Streams library, for Go, Python, C/C++, and many other programming languages as well as REST APIs.

Kafka combines three key capabilities so you can implement your use cases for event streaming end-to-end with a single battle-tested solution:
- To publish (write) and subscribe to (read) streams of events, including continuous import/export of your data from other systems.
- To store streams of events durably and reliably for as long as you want.
- To process streams of events, as they occur or retrospectively.

All this functionality is provided in a distributed, highly scalable, elastic, fault-tolerant, and secure manner. You can choose between self-managing your Kafka environments, and using fully managed services offered by a variety of vendors. Kafka Beat is designed on a consumer API to collect data from a Kafka topic only.

In addition to command line tooling for management and administration tasks, Kafka has five core APIs for Java and Scala:
- The Admin API to manage and inspect topics, brokers, and other Kafka objects.
- The Producer API to publish (write) a stream of events to one or more Kafka topics.
- The Consumer API to subscribe to (read) one or more topics and to process the stream of events produced to them.
- The Kafka Streams API to implement stream processing applications and microservices. It provides higher-level functions to process event streams, including transformations, stateful operations like aggregations and joins, windowing, processing based on event-time, and more. Input is read from one or more topics in order to generate output to one or more topics, effectively transforming the input streams to output streams.
- The Kafka Connect API to build and run reusable data import/export connectors that consume (read) or produce (write) streams of events from and to external systems and applications so they can integrate with Kafka. For example, a connector to a relational database like PostgreSQL might capture every change to a set of tables. However, in practice, you typically don't need to implement your own connectors because the Kafka community already provides hundreds of ready-to-use connectors.

To understand Kafka in more detail, read the Documentation.
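To make the Producer and Consumer APIs above concrete, here is a minimal sketch using the community-maintained kafka-python client rather than the native Java/Scala clients. The broker address, topic name, and consumer group are placeholder assumptions for a local test cluster.

```python
# Minimal publish/subscribe sketch using the community kafka-python client.
# Broker address, topic, and group id are assumptions for a local test setup.
from kafka import KafkaProducer, KafkaConsumer

BOOTSTRAP = "localhost:9092"   # assumed local broker
TOPIC = "quickstart-events"    # assumed topic name

# Publish (write) a few events to the topic.
producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
for i in range(3):
    producer.send(TOPIC, key=b"demo", value=f"event {i}".encode("utf-8"))
producer.flush()   # block until all buffered records are sent
producer.close()

# Subscribe to (read) the same topic and process the stream of events.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BOOTSTRAP,
    group_id="demo-group",          # consumer group for load-balanced reads
    auto_offset_reset="earliest",   # start from the beginning if no committed offset exists
    consumer_timeout_ms=5000,       # stop iterating after 5 s of inactivity
)
for record in consumer:
    print(record.topic, record.partition, record.offset, record.value)
consumer.close()
```

The same flow maps directly onto the Java Producer and Consumer APIs; only the client library differs.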
OCTAVE stands for Operationally Critical Threat, Asset, and Vulnerability Evaluation methodology. This technique focuses on assessing organizational risks, rather than technological risks, for example if a company experiences a data breach, which could impact that business operationally. This methodology was initiated by Carnegie Mellon University (USA) and the CERT (Computer Emergency Response Team) Division of the SEI (Software Engineering Institute) in 2003. It is generally aimed at small to medium sized businesses of less than 100 people, and would be coordinated by Management and Operations rather than Technical Teams.1 The benefits of this approach are: - Assists in the identification of mitigation techniques - Contributes to risk management and awareness - Encourages cross-team collaboration - Reduces the need for excessive documentation - Gives a reliable asset-centric view - Highly customizable for security teams and risk environments - Provides repeatable and consistent results OCTAVE is a self-directed approach, meaning that people from an organization take responsibility for setting the organization’s security strategy, which can make this method difficult to scale. OCTAVE also assumes that the company has a broad knowledge of the business and security processes and can conduct all of the necessary activities itself. As stated by the Software Engineering Institute; OCTAVE Allegro is a methodology to streamline and optimize the process of assessing information security risks so that an organization can obtain sufficient results with a small investment in time, people, and other limited resources. It leads the organization to consider people, technology, and facilities in the context of their relationship to information and the business processes and services they support.2 OCTAVE-S is a variation of OCTAVE tailored to smaller organizations (less than 100 people). OCTAVE-S is led by a small, interdisciplinary team (three to five people) of an organization’s personnel who gather and analyze information, producing a protection strategy and mitigation plans based on the organization’s unique operational security risks. To conduct OCTAVE-S effectively, the team must have broad knowledge of the organization’s business and security processes, so it will be able to conduct all activities by itself.3 Other Threat Modeling Methodologies To learn more about other methodologies please visit Threat Modeling Methodologies. 1. Software Engineering Institute, Threat Modeling: 12 Available Methods (2018) https://insights.sei.cmu.edu/blog/threat-modeling-12-available-methods/ 2. Software Engineering Institute, Introducing OCTAVE Allegro (2007) https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=8419 3. Software Engineering Institute, OCTAVE®-S Implementation Guide, Version 1.0 (2005) https://resources.sei.cmu.edu/asset_files/handbook/2005_002_001_14273.pdf Bringing you the latest on all things threat modeling and architectural security.
In late September, the U.S. Department of Energy (DOE), in partnership with the Manufacturing Innovation Institute (CyManII), announced an investment of more than $1 million in five projects to help make advanced manufacturing processes and supply chains more cybersecure. The selected projects will work to not only increase the efficiency of the advanced manufacturing technologies that make our clean energy future possible, but will also directly address existing challenges that make them expensive and difficult to secure. “Manufacturing processes and technologies are changing quickly as the sector adapts to increased energy efficiency and resiliency,” said Assistant Secretary for Energy Efficiency and Renewable Energy Kelly Speakes-Backman. “DOE’s investment in cybersecurity innovation ensures that as we build America’s clean manufacturing future, we’re also securing and protecting our supply chains, industrial control systems and infrastructure.” The five selected projects below cover a range of technical objectives identified by CyManII that strengthen the cybersecurity infrastructure of advanced manufacturing while optimizing energy efficiency: - GE Research: Design, implementation and demonstration of the building blocks for secure and energy-efficient automation components used in manufacturing. - Indiana University: Development of an Industrial Internet of Things-based energy management framework that incorporates smart manufacturing, energy usage and cybersecurity data to identify and evaluate energy-saving opportunities in real-world industrial environments. - Purdue University: Construction of a secure, scalable, open shop-floor data hub for integrating, assessing and indexing manufacturing data streams for more efficient access. - Texas Tech University: Development of a framework for determining baselines for secure automation of advanced manufacturing, specifically demonstrated in chemical conversion processes. - University of California, Irvine and Omnigence: Establishment and evaluation of methods for securing the semiconductor supply chain. CyManII was launched Sept. 1, 2020, as the DOE’s cybersecurity in energy efficient manufacturing Institute. These selections were the result of CyManII’ s open competitive solicitation and rigorous review process. CyManII is a partner within the Office of Energy Efficiency and Renewable Energy’s (EERE) Advanced Manufacturing Office and is co-managed by the DOE’s Office of Cybersecurity, Energy Security, and Emergency Response (CESER).
A common cryptographic protocol used to protect the majority of website communications and some virtual private networks is also increasingly being used by threat actors to protect their attacks, according to a new report. “In 2020 we found that 23 per cent of malware we detected communication with a remote system over the internet were using TLS,” the report says. “Today it is nearly 46 per cent.” Much of this can be linked in part to the increased use of legitimate web and cloud services protected by TLS — such as Discord, Pastebin, Github and Google’s cloud services — as repositories for malware components, as destinations for stolen data, and even to send commands to botnets and other malware, the report adds. It’s also linked to the increased use of Tor and other TLS-based network proxies to encapsulate malicious communications between malware and the actors deploying them. Google’s cloud services were the destination for nine percent of malware TLS requests, closely followed by India’s. Last month Sophos saw a rise in the use of Cloudflare-hosted malware — largely because of a spike in the use of Discord’s content delivery network, which is based on Cloudflare. This accounted for four percent of the detected TLS malware that month. Researchers found over 9,700 malware-related links to Discord. Many were Discord-specific, targeting the theft of user credentials, while others were delivery packages for other information stealers and trojans. Nearly half of all malware TLS communications went to servers in the United States and India, researchers found. Researchers also saw an increase in TLS use in ransomware attacks over the past year, especially in manually-deployed ransomware. This is in part, the report says, because of attackers’ use of modular offensive tools that leverage HTTPS. Sophos argues that malware communications typically fall into three categories: downloading additional malware, exfiltration of stolen data and retrieval or sending of instructions to a command and control server, all of which can leverage TLS. The vast majority of malicious TLS traffic is of the first kind: loaders, droppers and other malware downloading additional malware to the system they infect. TLS is used to try to evade basic payload inspection. “It doesn’t take much sophistication to leverage TLS in a malware dropper,” according to the report. “because TLS-enabled infrastructure to deliver malware or code snippets is freely available. Frequently, droppers and loaders use legitimate websites and cloud services with built-in TLS support to further disguise the traffic.” The PowerShell-based dropper for LockBit ransomware was observed retrieving additional script from a Google Docs spreadsheet via TLS, the report notes, as well as from another website. And a dropper for the AgentTesla information stealer also has been seen accessing Pastebin over TLS to retrieve chunks of code. While Google and Pastebin often quickly shut down malware-hosting documents and websites on its platform, attackers simply create new ones for their next attack. Malware operators can use TLS to obfuscate command and control traffic, the report points out. By sending HTTPS requests or connecting over a TLS-based proxy service, the malware can create a reverse shell. This allows commands to be passed to the malware, or for the malware to retrieve blocks of script or required keys needed for specific functions. 
Command and control servers can be remote dedicated web servers, or they can be based on one or more documents in legitimate cloud services. For example, the Lampion Portuguese banking trojan used a Google Docs text document as the source for a key required to unlock some of its code. Deleting the document acted as a killswitch. By leveraging Google Docs, the actors behind Lampion were able to conceal controlling communications to the malware and evade reputation-based detection by using a trusted host.

More recently, the Dridex trojan has been updated to encapsulate communications with TLS, using HTTPS on port 443 to both download modules and exfiltrate data. In addition, the Cobalt Strike and Metasploit toolkits often used by ransomware groups use TLS. The report gives other examples of TLS abuse.

One problem for defenders is that some malware use TLS over non-standard IP ports, so analysts may underestimate its usage. “TLS can be implemented over any assignable IP port,” the report indicates. “And after the initial handshake it looks like any other TCP application traffic.”

The other problem is the abuse of TLS in cloud and web services like Google Docs, Discord, Telegram, Pastebin and others. “The same services and technologies that have made obtaining TLS certificates and configuring HTTPS websites vastly more simple for small organizations and individuals have also made it easier for malicious actors to blend in with legitimate Internet traffic, and have dramatically reduced the work needed to frequently shift or replicate C2 infrastructure. Without a defense in depth, organizations may be increasingly less likely to detect threats on the wire before they have been deployed by attackers.”

Sophos released the report to coincide with the release of new XGS series firewall appliances that include TLS inspection. Most firewalls include TLS inspection.
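Because TLS can run over any TCP port, one compensating tactic is to look at payload bytes rather than port numbers. The sketch below is a toy heuristic for illustration only, not how Sophos or any particular firewall implements TLS inspection: it checks whether the first bytes of a captured TCP payload look like a TLS ClientHello record, which is one way TLS traffic on unusual ports can be flagged for closer inspection. The sample payloads and port numbers are made-up examples.

```python
# Toy heuristic: flag TCP payloads that begin with a TLS ClientHello record,
# regardless of destination port. Sample payloads and ports are assumptions.

def looks_like_tls_client_hello(payload: bytes) -> bool:
    """Check the first bytes of a TCP payload for a TLS handshake/ClientHello shape."""
    if len(payload) < 6:
        return False
    content_type = payload[0]            # 0x16 = TLS handshake record
    version_major, version_minor = payload[1], payload[2]
    handshake_type = payload[5]          # 0x01 = ClientHello (record header is 5 bytes)
    return (
        content_type == 0x16
        and version_major == 0x03        # TLS record versions are 0x03 0x01 .. 0x03 0x04
        and 0x01 <= version_minor <= 0x04
        and handshake_type == 0x01
    )

# Synthetic examples: a fake ClientHello prefix and ordinary HTTP request bytes.
fake_hello = bytes([0x16, 0x03, 0x01, 0x02, 0x00, 0x01]) + b"\x00" * 20
plain_http = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"

for name, payload, port in [("suspect", fake_hello, 4444), ("web", plain_http, 80)]:
    if looks_like_tls_client_hello(payload) and port != 443:
        print(f"{name}: TLS handshake on non-standard port {port} - worth a closer look")
    else:
        print(f"{name}: nothing flagged")
```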
Q-CTRL is a global provider of quantum control infrastructure software with a stated goal to make quantum useful. Toward that end, the company recently announced the creation of a new quantum sensing division that will be led by Dr. Russell Anderson. Prior to his earlier assignments with Q-CTRL, Dr. Anderson was with La Trobe University, School of Molecular Sciences, in Victoria, Australia. He obtained his Bachelor of Science in Physics and Pure Mathematics from The University of Western Australia and his PhD from the Swinburne University of Technology. Dr. Anderson and an initial team of 15 scientists have a mission to develop new generations of ultrasensitive quantum sensors using Q-CTRL’s quantum control software along with other technologies. Most of today’s media coverage is about quantum computing, however quantum sensing is equally important. Quantum sensors depend on quantum mechanical properties such as atomic energy levels, varying photonic levels, photonic states, and spins of subatomic particles. These technologies make quantum sensors ultra-sensitive to infinitesimally weak changes in its environment and provide information about the sources of those changes. The use of these atomic properties also makes the sensors very *stable* over time, because the signal comes from the rules of quantum. Q-CTRL plans to explore new modes of quantum sensors using its expertise in advanced software, AI, and signal processing. In particular, it is focused on how robust control and feedback can be used to stabilize quantum sensors against the common sources of degradation experienced in real field settings outside of the laboratory. The company believes there is a large market for quantum sensors in the areas of defense, PNT (positioning, navigation, and timing), mineral exploration, magnetic anomaly detection, climate monitoring, long-term weather forecasting, and space exploration. For example, software-enhanced quantum gravity sensors could image subterranean mineral deposits or even discover new underground water flows. Highly sensitive quantum accelerometers could also be used as navigation aids in locations where GPS might be unavailable or in the event of a catastrophic GPS failure. Types of quantum sensors Here are common types of quantum sensors: - An atomic clock is a quantum sensor. It has hundreds of uses such as GPS navigation, the internet, cell phone calls, and others. - Atom interferometers are used as gravimeters and gravity gradiometers to study physical aspects of the earth such as volcanos, groundwater, mineral deposits, tidal dynamics, and information about what exists beneath the polar ice caps. - Optical magnetometers use atomic spins in vapors, Bose-Einstein condensates, and nitrogen vacancies in solid states. These devices allow mapping applications and navigation locally and at a distance. Q-CTRL uses these to identify and localize emitters in the RF for defense purposes. This was the subject of its Army Quantum Technology Challenge demonstration. - Another solid-state device is a Superconducting Quantum Interference Device (SQUID). These are most commonly used in medical applications. - Quantum optical effects can be used for microscopy, spectroscopy, and interferometry. For example, the National Science Foundation uses squeezed light in its Laser Interferometer Gravitational-Wave Observatory (LIGO) to detect collisions between black holes. 
- Quantum electric field sensors rely on Rydberg atoms that are created by injecting extra energy into an atom causing the orbit of the outer-most electron to expand, increasing its sensitivity to changes in electromagnetic fields. A Rydberg quantum sensor could be used as an ultra-sensitive broadband receiver or antenna and replace those shown in the photo above. Army QTC demonstration Q-CTRL recently had the opportunity to provide a public demonstration of one of its quantum sensing capabilities at the Australian Army Quantum Technology Challenge(QTC) in Adelaide, Australia. According to the Australian Army website, the QTC is an annual series of events that see teams of Australia’s world-leading quantum scientists and engineers competing to show how quantum technologies can solve important Army problems and deliver unprecedented capabilities. Q-CTRL’s task at the QTC challenge was to locate electromagnetic emitters operating within a defined simulated battlespace. The Australian Army looked to the demonstration results to determine if quantum sensors can detect, locate and identify electromagnetic emitters with greater precision, range and bandwidth. Funds from the Army QTC are part of the $60M of publicly disclosed contracts that have previously been awarded to Q-CTRL’s sensing team and its partners over the last 18 months by the Australian government. These include a project with Advanced Navigation as Q-CTRL’s partner on a hybrid classical-quantum inertial navigation system. Additional project funding for space-qualified quantum sensors included a grant from Cooperative Research Centres Projects and funding from the Modern Manufacturing Initiative. Complexity of Q-CTRL hardware for the QTC The quantum sensing magnetometer that Q-CTRL demonstrated at the Army QTC used warm vapor cells and an optical pump probe technique previously developed by four members of the Q-CTRL team. Their prior research happened to be particularly well-suited for the Army QTC demonstration. The Army QTC challenge required Q-CTRL’s quantum sensors to identify signatures that could potentially be transmitted from unmanned land, air, or undersea vehicles, a scenario where many emission frequencies are in the hundreds of kilohertz. To meet this challenge, Q-CTRL researchers developed quantum sensors with an instantaneous bandwidth of one megahertz and with the capability to detect signals emitted in that range of radio frequencies. Dr. Michael Biercuk, Professor of Quantum Physics and Quantum technology at the University of Sydney and the CEO and Founder of Q-CTRL, said the ARMY QTC demonstration was very successful and Q-CTRL met all requirements of the challenges. “A high instantaneous bandwidth was a key feature needed for the demonstration,” Dr. Biercuk said. “That allows us to modulate a signal in the time and frequency domains and read those signals out directly. The team had previously demonstrated that capability several years ago and published a research paper on the results. However, the QTC demonstration required us to add a real-time localization algorithm combining signals from multiple magnetometers.” Dr. Biercuk further explained that in addition to detecting an emitter’s signature, the team also needed to locate the emitter as well as identify it. The four-corner magnetometer array served that purpose. To determine differences in signals received at each of the four magnetometers, an FPGA was used to calculate the emitter’s real-time location. Dr. 
Biercuk pointed out another important reason he believed the demonstration was successful, even though it wasn’t a QTC criterion. “There were other ongoing demonstrations near our magnetometers,” he said. “People and robots were walking around near our demonstration and train lines were running directly beneath the convention center. But none of those activities adversely affected the performance of our software augmented sensors.” That demonstrates the quality of Q-CTRL’s quantum sensors used for QTC. It is not only difficult, but also vitally important for quantum sensors to filter out unrelated background noise and suppress what is unrelated so that results reflect only what the sensor was designed to detect. According to the Q-CTRL website, its key markets include an $8B annual earth observation market and a $14B annual collective market for positioning, navigation, and timing (PNT). When asked how quantum sensors would be marketed, Dr. Beircuk reaffirmed Q-CTRL’s previously stated strategy. “We are not in the business of building and selling widgets,” he said. “We sell software licenses, and we sell IP licenses, and we sell capability, but we don’t sell hardware.” With Q-CTRL’s successful Army QTC demonstration now behind it, Dr. Biercuk said the research team is looking forward to following up that success with continued development on another sensing project. Q-CTRL currently has a partnership with Sydney, Australia based Advanced Navigation, a manufacturer of AI-enhanced inertial navigation systems. Q-CTRL is doing development work on an ultra-high-performance quantum PNT system and plans to license the IP to Advanced Navigation since it is an expert mil-spec manufacturer. Advanced Navigation will also handle the system integration and product distribution. “The next stage of our contract with Advanced Navigation calls for Q-CTRL to deploy and validate our platform’s performance under actual field conditions,” Dr. Biercuk said. “We’ve done a lot of laboratory demonstrations and validations under contract. Next, we’ll put the system on a boat and test its performance under real conditions. It’s very exciting for the team.” Q-CTRL has already demonstrated it can build sophisticated quantum sensors. Up until now, the majority of its technical focus has been on agnostic quantum computing software that addresses errors and the instability of NISQ quantum hardware. Q-CTRL currently has three main products: Boulder Opal, Black Opal, and Fire Opal. It is well documented that Boulder Opal and Fire Opal significantly improve quantum computing by mitigating quantum errors, optimizing quantum hardware, and improving algorithmic and logic operations. Black Opal is a sophisticated but simple and understandable quantum training program. Q-CTRL’s proven technical capabilities and its deep expertise with quantum control software points to the sure success of its new quantum sensor division, which will likely serve to significantly advance the science of quantum sensing. I believe the entire quantum ecosystem will eventually benefit from Q-CTRL’s advanced quantum sensing research. Paul Smith-Goodson is Vice President and Principal Analyst for quantum computing, artificial intelligence and space at Moor Insights and Strategy. You can follow him on Twitter for more current information on quantum, AI, and space.
Receiving a 21st century education means preparing students for a global workplace. That preparation means technology resources and digital assets must be incorporated into the classroom and put in the hands of teachers, administrators and students. Local communities, state organizations and the federal government have already started deploying these critically needed items through programs such as Race to the Top grants, ESEA flexibility waivers and the ConnectED initiative. The tech resources provide big opportunities for our schools, but they also create a huge challenge – how do schools keep up with all of it? Thousands of computers. Thousands of students. Millions of dollars. The task can get a little more complicated when you throw in the thousands of other assets school districts need to track – textbooks, uniforms, athletic gear, instruments, furniture, maintenance equipment, and much more. A single district can easily end up having to keep up with hundreds of thousands or even millions of items. Fortunately, the task doesn’t have to be arduous. An asset management system can give districts a complete picture of everything they own and allow them to manage their movements and amounts of assets. These systems feature software, barcodes and scanners to provide up-to-the-minute information and enable users to create reports, perform audits, maintain fund budgets, easily check-in and check-out items, and more. An easy-to-use and effective asset management system can quickly provide answers for these very important questions for school districts: Where are the assets? Can you find each and every asset whenever needed? At minimum, your asset management system should be able to tell you exactly where a particular asset is located. If you use a system with barcodes and scanners, you can easily record every time an asset is moved to a new physical location such as an office, classroom, library or student’s home. While Excel and accounting software can be used to record basic asset information, they are unable to provide real-time location data and are often prone to errors. Who is using the assets? Can you identify the party responsible for the asset? Ideally, the user – either students or staff – moving or checking out the asset should also be noted on the record. That creates an audit trail and ensures accountability for the asset. Plus, not only will the school always know the asset location, you’ll also know who’s responsible for it. What funds were used to purchase assets? Can you verify that funds are being spent properly? Grants require schools to maintain a lot of asset data on both individual assets and the funds used to purchase them. A good asset management system should be able to track all asset purchasing details: purchase order number, funding source, cost, purchase date, and depreciation method. To help your finance department, you should also easily be able to create custom funding sources to separate all awarded amounts and always know the exact total spent and current remaining funds. Have the assets been properly maintained? Can you keep assets in good working order and track the total cost of ownership? According to Deloitte, the gradual deterioration and aging of assets is the top risk associated with owning and operating assets, with nearly two-thirds (62%) of organizations citing it. By tracking completed maintenance, you take advantage of any warranties or service agreements included with the item purchased. 
This is another area where finance benefits- knowing the amount of money used to maintain and repair your assets provides historical data needed to determine future budgetary needs as well as total cost of ownership. Does use of the assets meet regulations? Is disposition of the asset in compliance? Regular asset audits are simply good stewardship and can help ensure your district never loses funds or is forced to repay money due to non-compliance with funding source guidelines or other regulatory requirements. Without an asset management system, employees need to work from printed lists and manually check off fixed assets. This can be time-consuming and error prone. However, an asset management system makes the process fast and error free with scanning mobile computers and smartphone apps that instantly verify and update information on an asset’s location, use and current condition. Simply scan barcodes and the software automatically updates the records. How is asset disposal handled? Are asset procedures followed for each asset’s lifecycle? Tracking assets is a significant responsibility for any school or district. Detailed information about purchase, maintenance, and physical disposition of assets is required for state and federal grant compliance. Asset disposal should be handled in a timely and responsible fashion. While it may seem difficult to properly dispose of a physical asset, it can be equally dangerous to have non-functioning equipment in storage or simply sitting around your school. A better solution is to assess your physical assets on a regular basis and properly and expediently dispose of obsolete equipment. After disposal, you will still have a full transactional record of the asset’s lifecycle.
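The questions above map naturally onto a simple audit-trail data model. The following is a hedged, generic sketch, not any particular vendor's schema: it shows how an asset record might track location, custodian, funding source, and disposal so that every scan leaves a timestamped entry. All field names and sample values are illustrative assumptions.

```python
# Generic illustration of an asset audit trail; field names are assumptions,
# not the schema of any specific asset-management product.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Event:
    timestamp: datetime
    action: str          # e.g. "checked out", "moved", "maintained", "disposed"
    location: str        # office, classroom, library, or "student home"
    responsible: str     # staff member or student accountable for the asset
    note: str = ""

@dataclass
class Asset:
    tag: str                     # barcode value scanned on each transaction
    description: str
    funding_source: str          # grant or budget line used for the purchase
    purchase_cost: float
    history: List[Event] = field(default_factory=list)
    disposed: bool = False

    def record(self, action: str, location: str, responsible: str, note: str = "") -> None:
        """Append a timestamped entry so location and accountability are always known."""
        self.history.append(Event(datetime.now(), action, location, responsible, note))
        if action == "disposed":
            self.disposed = True

# Example: a laptop bought with grant funds, checked out to a student, then returned.
laptop = Asset(tag="CHR-00123", description="Student laptop",
               funding_source="ConnectED grant", purchase_cost=289.00)
laptop.record("checked out", "Student home", "Student #4821")
laptop.record("moved", "Room 214", "Ms. Rivera", note="Returned at end of term")
print(f"{laptop.tag}: {len(laptop.history)} transactions, last seen at {laptop.history[-1].location}")
```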
Sending and receiving data from the Internet has become such an integral part of our lives that we don’t even think about the nitty-gritty details of how it is transported, until a problem comes up in the form of slow connections, dropped calls, or no response to your requests. When these problems occur, you’d want to fix them right away because they are annoying and frustrating, to say the least. But how do you know what’s the cause of the slow or dropped connections? Without knowing the cause of the problem, how can you fix it? That’s why in this article, we’ll take you on a trip to the basics, so you’ll know how to identify the connection issues within just a few minutes. What are Data Packets? When you read a piece of news online, send or receive emails, check your messages on social media, shop, bank, or do just about any online activity, what you’re essentially doing is sending and receiving small packets of data. The underlying protocol that enables you to send and receive messages is called TCP/IP and this protocol divides every message into small chunks called data packets. When these packets arrive at their destination, the TCP/IP protocol of the destination system reassembles them to get the original message. When you examine the data packet closely, you’ll find that it contains not just the data that you send and receive, but also other identifying information that helps the receiving protocol to reassemble the packets in the right sequence. Here’s some of the other information that these data packets contain. The header contains control information and instructions about the data contained in the packet. It includes, - Length of the packet - Synchronization bits that help to match the packet and the network through which it is sent. - Sender’s IP address - Receiver’s IP address - Protocol that defines the type of packet that’s being transmitted. For example, emails, videos, etc. - Packet number to identify the sequence Typically, the header takes about 96 bits. ‘Payload’ is the actual data or the body of the packet. If the length is of a fixed size, then blank data bits may be added to pad it up to reach the prescribed length. The ‘footer’, also called the ‘trailer’, contains a few bits to tell the receiving device that it has reached the end. Most data packets also have bits for error checking, with the most common method being the Cyclic Redundancy Check (CRC). Thus, this is how communication takes place between any two computers on the Internet. Reasons for Packet Loss As you may have guessed, when one or more packets fail to reach the receiver, it’s called packet loss. This is a potentially serious situation that impacts the performances of websites and applications and can impede access to them. It results in slow connections, dropped calls, no response to your requests, and more. Below are some of the causes of packet loss. Network congestion is a situation where the network has more packets than it can deliver and if the connection is extremely slow, some of the data packets are discarded because there is a limit to how much a network can store. Further, the network may discard the remaining to keep pace with the sending rate. The underlying hardware must be competent enough to handle the rate of transmission of these data packets. Outdated firewalls, networks, and routers can slow down the network greatly and increase congestion. In turn, this leads to lags and resultant packet loss. 
Errors and Bugs Errors and bugs in software can impact the network and the way it routes data packets. So, if there are any untested or unresolved bugs in your software application, it can cause changes in network behavior, that could eventually cause packet loss. One form of cyberattack called the packet drop attack causes the loss of data packets. During this attack, a hacker takes charge of the routers and intentionally drops a few packets to disrupt the communication. While these are not an exhaustive list of reasons for packet loss, it sure gives you an idea of the possible ways by which packets can fail to reach the destination. So, the next question is how do you test for packet loss on Windows? Testing for Packet Loss on Windows The first step is to identify that there is a packet loss. Some of the possible ways to identify them are: - Slow speed of transmission - High latency - Lag while playing video games or watching streaming content - High levels of buffering while watching multimedia content - Dropped voice and video calls when connecting through the Internet While these signs are indicators of a packet loss, the most foolproof way to recognize a packet loss is through the TCP/IP protocol. While sending data through the TCP/IP protocol, the sender keeps a track of the number of packets and their sequence. The receiver, on the other hand, acknowledges the receipt of each packet. Based on these acknowledgments, the sender can estimate if there has been a packet loss and if so, what percent of packets have been lost. So, you can know the rate of packet loss from the sender’s TCP/IP protocol and to get this information, you can use two tools that come built into the Windows operating system. These two tools are – ipconfig and ping. Though both these tools may seem similar, in reality, they gather data in different ways. Let’s now take a detailed look at both these tools. Ipconfig is an acronym for Internet Protocol Configuration. This is a console command that displays the current configuration of TCP/IP. It is often used with different parameters to get an idea of the health of the IPv4 and IPv6 addresses, gateway adapters, and subnet masks. How to Run ipconfig? Below are step-by-step instructions on how you can run ipconfig. - Open the Run dialog box by pressing the Windows and “R” key - Type “cmd” and press Enter - Alternatively, you can navigate to the start menu and click on the command prompt - When the command prompt window opens, type “ipconfig” and press Enter - This will display information about your default gateway, mask, and other pertinent information. You can also use this tool with its many parameters for specific actions or to get the results you want. Some of the possible options are: Accordingly, you can decide which ones are appropriate for you. Ping is a useful utility that allows you to check the health and reachability of the receiving device. Essentially, it sends an ICMP echo request and expects to get an echo reply from the receiver. Based on the response and time taken, you can identify the cause of delays. How to Run ping? Like ipconfig, you can run ping also from the command prompt. However, note that you can use this command only if TCP/IP is installed as a component in the network adapter. It also supports a host of parameters that you can use to get specific information. 
Some of those parameters are:
/t - Sends the echo request messages until they are explicitly interrupted with the CTRL + C command
/a - Provides reverse name resolution for the destination IP address
/n <number> - Specifies the number of echo request messages the system should send. The default value is 4.
/l <size> - Specifies the length of the data field in the messages. This value is in bytes and the default is 32.
/f - This parameter adds the "Do Not Fragment" flag in the IP header, so routers don't fragment the message
/i <TTL> - Specifies the maximum time-to-live for each message. The default value is set by the host and the maximum is 255.
/v <number> - Gives the type of service and it is specified as a decimal value from 0 to 255
/r <number> - Records the route for the specified number of hops in the path
/s <number> - Specifies the Internet timestamp for the specified number of hops
/w <milliseconds> - Specifies the amount of time, in milliseconds, to wait for the echo reply message
/4 - Uses only IPv4
/6 - Uses only IPv6
/? - Displays help
With these different parameters, you can get precise information about the time taken to send and receive messages and the possible source of the problem.

Testing With ipconfig and ping
For optimal packet loss testing on Windows, you have to use both ipconfig and ping.

Check the Default Gateway's Availability
Firstly, use the ipconfig command to get the IP address of the default gateway. Next, use ping to send a certain number of ICMP messages to the gateway. Make a note of the rate of packet loss, the time taken for transmission, etc., as this information can help you zero in on the source of the problem. With this series of actions, you can know if there's a problem with your gateway, routers, and adapters. When there are no problems, it's time to check the external network.

Pinging the External Network
The idea behind pinging the external network is to understand if packet loss is due to network congestion. It has become a standard practice to send a ping message to google.com because these servers score high on stability and reliability. If the results come back clean, it's clear that there is no network congestion and no packet loss.
To get accurate results, consider running both these tests at different times of the day. It even makes sense to run them from different places to understand the strength of Wi-Fi signals, possible network congestion, faulty hardware equipment, and more. Below are some possible scenarios and the conclusions you can draw from them. Remember, this is not an exhaustive list, but one that aims to give you an idea of how to use ipconfig and ping to test for packet loss.

Internet cabling or Wi-Fi strength
When you run the same tests from different parts of your house, you may experience different rates of packet loss. This could indicate that the Internet cabling is flawed. On the other hand, if you're using wireless networks for connectivity, it could indicate that the strength of the Wi-Fi network varies greatly in different parts of your house. Accordingly, you can decide the optimal place for setting up your work desk or move your router to a different part of the house.
When the rate of packet loss varies greatly from one time of the day to another, it could indicate network congestion. This is likely because at some times of the day, more people are accessing the Internet and, as a result, packets are getting queued and eventually lost or discarded.
This information can help you to better plan your Internet usage, and possibly move the Internet-heavy tasks to the lean times of the day when there is no network congestion. If the packet loss rates vary while trying at the same time and location, but with different devices, it is an indication of faulty hardware. Outdated versions or models can also be the cause of these differences. When some applications are slower than others, ping to the specific application. For example, if it takes a lot of time for The New York Times page to load, do a ping to nytimes.com to see if there’s any problem. Sometimes, the problem can lie with the browser or even the receiver’s network. Likewise, software bugs can cause some applications to load more slowly or worse, even disrupt communication. All the data that you send and receive from the Internet are transmitted by the TCP/IP protocols that divide the data into small packets and send them. For every packet sent, the protocol waits for an acknowledgment from the receiver, and the packet is deemed to be lost if there’s no acknowledgment from the receiver. Using this feature, we can determine the packet loss rate and the possible reason for the same. Some of the scenarios mentioned above can help to determine the possible cause of the problem, so you can fix it right away to ensure an uninterrupted connection to your favorite online applications.
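As a practical wrap-up, the loss rate that ping reports can also be collected programmatically, which makes it easy to repeat the test at different times of day and keep a log. The sketch below is a minimal illustration that runs the Windows ping command from Python and pulls the loss percentage out of its English-language summary line; the target hosts, packet count, and output format are assumptions and may differ on localized or non-Windows systems.

```python
# Minimal illustration: run Windows ping and extract the reported loss percentage.
# Assumes an English-language Windows ping whose summary contains "(N% loss)".
import re
import subprocess

def measure_loss(host: str, count: int = 10) -> float:
    """Return the packet-loss percentage reported by ping, or -1.0 if it cannot be parsed."""
    result = subprocess.run(
        ["ping", "-n", str(count), host],   # "-n" sets the echo-request count on Windows
        capture_output=True, text=True,
    )
    match = re.search(r"\((\d+)% loss\)", result.stdout)
    return float(match.group(1)) if match else -1.0

if __name__ == "__main__":
    # Assumed targets: a typical default-gateway address and a stable external host.
    for target in ("192.168.1.1", "google.com"):
        loss = measure_loss(target)
        if loss >= 0:
            print(f"{target}: {loss:.0f}% packet loss")
        else:
            print(f"{target}: could not parse ping output")
```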
Today’s hackers are equipped with a high-tech, high-powered arsenal of cyberattack weaponry. Worms, trojans, bot-networks, ransomware, and brute force password crackers are just a few of the software tools they use to steal sensitive data. Worse than that, hackers also employ hardware that can steal credit card information from scanners at gas pumps and even directly out of a victim’s wallet. While it may be overwhelming to consider the amount of sophisticated hardware and software hackers use to exploit weaknesses, the truth is, they often won’t have to go to those lengths. Before attempting to code and deploy a trojan or bot-network, hackers will use simple means, such as social engineering or even making a few phone calls with basic information that could be gathered from Google. Furthermore, even the most complex hacking operations start out with a few simple emails, friend requests, or chats. The fact is, hackers don’t primarily target vulnerabilities in software or hardware—they target vulnerabilities in people. If you own a business, hackers will continuously try to compromise your cybersecurity by initiating malicious contact with your employees. In these cases, investing in the best cybersecurity software and hardware will not be enough. Safeguarding your business’ sensitive data on all fronts depends on creating and managing a strong cybersecurity awareness program. A security awareness program is a way to ensure that everyone at your organization has a working knowledge of cybersecurity and a sense of responsibility. Security awareness training programs are important because they reinforce that security is the responsibility of everyone in the company. This article will offer a few steps to get your business’ cybersecurity awareness program jump started. Assess the Current Cybersecurity Baseline The IBM 2017 Threat Analysis Index reported that nearly half of all email is spam, with a significant portion containing malicious code. Furthermore, The Symantec 2017 Internet Security Threat Report found that phishing had become the number one means of delivering malware—but how many of your employees could recognize a sophisticated phishing attack? Before discussing cybersecurity in general, it is advantageous to gauge your organization’s knowledge baseline. Establishing baseline assessment scores related to phishing susceptibility and cyber security knowledge levels allow you to mark your starting point and measure progress. Have your IT team take note of the incidences of attempted and successful cyber-attacks your organization currently experiences before you begin employee awareness training. After implementing training and education, you should see a reduction in employee-driven cybersecurity incidents over time, which is a good indicator of program success. Deliver Consistent Internal Communications and Content Just like meeting financial and client retention goals, effective cybersecurity needs to become a regular part of the conversation at your organization. Fostering this communication requires a combination of group and individual education and training that can be accounted for and measured. For group training, consider using company-wide emails, presentations, newsletters, or working lunches. 
For individual training, consider the following formats:
- A security handbook
- Online training modules and quizzes
- A cache of essential articles and resources
- Role-based guidelines (e.g., what each team needs to know about security)
- Training programs (both for new hires and ongoing employee education)
- One-on-one or small group sessions with IT leaders
- Brief cybersecurity videos

Create and Enforce New Control Levels
Even if a phishing attack is successful at one level, it can still be contained and stopped in its tracks. Creating a system of control levels encourages communication between departments and helps support awareness of suspicious activity. Controls ensure that people and systems are only able to do what their roles dictate, with the appropriate approval. For example, a common cybercrime tactic involves calling a business' support team and requesting an accounting change. By forwarding a request like this to managers or enforcing a unique passcode system to safeguard the account, you've included another layer of security to contain and repel cyberattacks.

Take the First Step
Repelling cyberattacks is a reality of doing business in this day and age. You must safeguard your digital assets with training and best practices the same way you would protect a physical storefront. Keeping your organization up-to-date in cybersecurity best practices can be a daunting task. To stay safe and ahead of the curve, reach out to an IT solutions partner like Bizco Technologies. Since 1994, Bizco Technologies has operated with the core philosophy of helping small businesses grow by implementing the right information technology and cybersecurity solutions. If you'd like to learn more about implementing a cybersecurity awareness training program, don't hesitate to give us a call.
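Returning to the baseline idea discussed earlier: one simple way to make awareness training measurable is to track simulated-phishing click rates before and after training, per team. The sketch below is a generic illustration with made-up department names and numbers, not data from any real campaign or tool.

```python
# Generic illustration: compare simulated-phishing click rates before and after
# awareness training. All department names and counts are made-up examples.
baseline = {"Sales": (40, 11), "Finance": (25, 4), "Support": (35, 9)}        # (emails sent, clicks)
after_training = {"Sales": (40, 5), "Finance": (25, 2), "Support": (35, 3)}

def click_rate(sent: int, clicked: int) -> float:
    """Percentage of simulated phishing emails that were clicked."""
    return 100.0 * clicked / sent if sent else 0.0

print(f"{'Dept':<10}{'Baseline':>10}{'After':>10}{'Change':>10}")
for dept in baseline:
    before = click_rate(*baseline[dept])
    after = click_rate(*after_training[dept])
    print(f"{dept:<10}{before:>9.1f}%{after:>9.1f}%{after - before:>+9.1f}%")
```

A falling click rate, alongside fewer employee-driven incidents reported by IT, is the kind of evidence that shows the program is working.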
Today, we are experiencing a shift in manufacturing. Companies are shifting their focus from achieving economies of scale to economies of speed to achieve competitive advantages. However, realizing this is becoming increasingly difficult due to growing product varieties, process complexities, stochastic machine behaviors on the shop floor, and supply chain disruptions. Engineers need to respond to these challenges much faster than ever before. They have to analyze several process variables and factors to quickly and confidently make a decision.

Here is where the unique capabilities of AI can tremendously help companies realize the desired economies of speed. Using AI, engineers can analyze millions of lines of shop floor data from hundreds of process variables to make faster, better, and more confident decisions. For example, AI can help shop floor engineers quickly find the root causes of problems and prescribe appropriate actions. This, in turn, reduces response time and leads to more effective decisions and actions. AI can even predict future factory dynamics based on several variables and prescribe concrete proactive actions to engineers. Imagine a scenario where the engineer, before entering the factory in the morning, already knows what problems will occur during the day and what actions they need to be prepared to take. Would they feel empowered? They would, for two reasons. First, they will not be surprised by the upcoming problems and will be ready to act proactively. Second, they can be confident that the actions will effectively solve the problem without any ambiguity. We can help factories make that happen in reality using AI. We estimate the benefits of using AI solutions at 30-50% shop floor productivity gains, which add directly to the bottom line.

How can AI be used on the shop floor?
Consider a shop floor as a structure of three levels. The bottom is the process focus, the middle is the machine focus, and the top is the factory focus. We can use AI at all three levels to optimize factory performance in a fraction of the time.

The process level is where the tool in the machine interacts with the raw material to produce a product. At this level, we can use AI to control and optimize the process by studying the relationship between raw materials and the tool. For example, in a CNC machine, AI can achieve adaptive machining capabilities, i.e., adapting the tool parameters (such as cutting force and feed rate) to the variations in the incoming raw materials to optimize performance. It can increase tool life and reduce power consumption.

At the machine level, we can use AI to optimize performance by studying the machine's overall behavior using data collected from different sensors such as temperature, power, pressure, etc. For example, AI can help achieve condition-based and predictive maintenance of the machine, thus increasing its technical availability.

At the factory level, we can study the interactions between different shop floor elements such as machines, buffers, and material handling equipment from a flow perspective. The aim is to create a smoother, faster flow of products in the factory. We can use AI to optimize factory planning, scheduling, resource allocation, bottleneck elimination, and early identification of quality defects to optimize factory performance. A good tip here is to think about automating lean-based flow improvements using AI. We develop AI solutions for different use cases based on customer needs.
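As one concrete illustration of the factory-level focus just described, a simple data-driven way to point at a likely bottleneck is to compare how long each machine stays busy versus blocked over a shift. The sketch below uses made-up cycle data and a deliberately simple rule; a production system would learn these patterns from live shop-floor event streams rather than a hard-coded table.

```python
# Illustrative only: rank machines by utilization and blocked time from made-up
# shop-floor data to suggest a likely bottleneck. Real systems would use live
# event streams and more robust statistics.
shift_minutes = 480
machines = {
    # machine: (minutes busy, minutes blocked waiting for downstream space)
    "CNC-01":   (455, 10),
    "Wash-02":  (300, 140),
    "Assembly": (410, 25),
}

def bottleneck(data: dict) -> str:
    """Return the machine with the highest utilization as the likely bottleneck."""
    return max(data, key=lambda m: data[m][0] / shift_minutes)

for name, (busy, blocked) in machines.items():
    print(f"{name:<9} utilization {busy / shift_minutes:5.1%}  blocked {blocked / shift_minutes:5.1%}")
print("Likely bottleneck:", bottleneck(machines))
```

Logic like this would sit inside the broader AI solutions described above, delivered on the data and visualization platforms covered next.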
We use market-leading tools and platforms for doing this. We develop customized architectures that can extract the data automatically from AWS, Azure, Snowflake, database, or data lake. Then, we run the AI algorithms in the cloud and provide real-time insights and visualizations (e.g., using Power BI, QlikSense, Tableau) and actions to shop floor engineers (either at the back-office computers or mobile phones). We customize our AI solutions to enable shop floor engineers at all skill levels to consume AI insights to support faster and more effective decision-making. Thus, the companies can bring AI closer to the manufacturing engineers who needed it the most in day-to-day decision-making. How to identify relevant use cases? Identifying the manufacturing use cases at the process, machine, and factory levels is not a one-man decision. We need both AI and manufacturing engineers’ inputs to select feasible and impactful use cases. It is teamwork. AI engineers have a good understanding of AI’s strengths, weaknesses, and limitations. Manufacturing engineers have extensive knowledge of their manufacturing processes and factory dynamics. They can select high-impact manufacturing problems that benefit from having an AI-based solution. Sitting across a table, both can discuss the manufacturing problems and AI suitability to solve these problems effectively. For example, there might be instances where manufacturing engineers select a high-impact problem. But AI might not be a potential solution to the problem due to the technical difficulties encountered in computing and its implementation. Alternatively, there can be situations where there are medium impact problems, which may benefit from an AI solution in the long run. We can identify those possibilities and limitations only through a cross-functional team of manufacturing and AI engineers. Moreover, such a cross-functional team setup can ensure that the deep insights generated through AI translate into a measurable impact on the shop floor. Which use case to choose first for AI implementation? What are some recommendations to implement AI successfully? There are four practical recommendations for manufacturing companies that are starting their journey in AI. These will be helpful to set the right expectations and mindset before a company takes its first AI project. - Chose AI for high-impact high frequent problems - Problem Scoping >> Data sets - Augmentation is necessary - Starting small is beneficial Understanding that AI is not a silver bullet: Every shop floor problem does not require AI. Sometimes simple solutions do work. Moreover, one has to understand that every manufacturing problem can be unique. Each one of them might need a unique AI solution. For example, an AI for predictive maintenance for a roller bearing in a CNC machine might have a different architecture from an AI for a roller bearing in an induction motor. It is because of the differences in geometry of the machines and different machine dynamics. So, the same problem in two different setups can have two unique AI solutions. Our unique problem scoping framework will help to select the right set of problems. Problem >>> data sets: A common question we always encounter from manufacturing companies is, “I have big data sets. I want insights from the data”. This practice might not be impactful. Having a data focus might lose their attention in solving high frequency and high impact problems. So, selecting the right problem should always be the first step. 
Manufacturing engineers should spend significant time on this step. Figuring out the problem and working backward toward the AI solution is the best approach. We bring extensive manufacturing and AI implementation experience to help clients conceptualize and scope the shop floor problem appropriately.

Augmentation: Manufacturing engineers should always augment AI results. AI learns patterns from historical data and provides probabilistic predictions of the future. In doing so, it may miss certain aspects of the data, and its forecasts will not always be accurate. In one of our use cases, we developed an AI model to predict bottlenecks on the shop floor. Its accuracy averaged 90% after several iterations. How can the remaining 10% gap be covered? That is where the manufacturing engineer's domain knowledge, gained through years of experience, is valuable. We help realize this augmentation using our humans-in-the-loop AI framework.

Start small: Initial AI projects can be small. A wide-scoped project can be complex and take significant time to complete, delaying the benefits of the AI solution. It is better to start with a small project (a good tip is to break a big project down into several small ones), develop and implement the AI solution quickly, and realize the benefits. The scope can then be increased gradually to include other complexities. This practice helps companies learn what it takes to build, implement, and maintain AI over time. It also gives manufacturing engineers time to gain experience in consuming AI insights and to feel empowered for a continued AI journey. We help clients at all stages of the AI lifecycle and scale solutions effectively and efficiently on the shop floor.

I would love to hear your thoughts and ambitions for implementing AI on the shop floor. Feel free to reach out to me, and we can jointly realize your AI dreams.

Mukund works in the manufacturing sector, mainly advising clients on topics related to digital manufacturing, data analytics, artificial intelligence, and operations transformation. He has extensive experience working with major automotive companies in India and Sweden, including original equipment manufacturers (OEMs) and suppliers. His educational and professional background integrates the technical expertise of computer scientists with the manufacturing expertise of engineers, which helps ensure that the deep insights generated through AI translate into real, measurable impact in an organization. He is passionate about transforming manufacturing operations using IIoT, AI, data, insights, and actions. Contact details: email@example.com
Source: https://www.capgemini.com/se-en/2021/09/ai-on-the-shop-floor/
Wednesday, May 11, 2022, by Karim Husami

Technology plays an influential role in shaping the political landscape. It influences politics in several ways: it is a tool for political actors on the campaign trail, a divisive political topic in its own right, and a potential landmine that can sink political aspirations.

As a tool, technology lets political actors such as governments, politicians, and other organizations better understand and engage with people, and mobilize members of the public to their cause by broadcasting political messages. We have seen live examples of this for decades, every time we scroll past a political ad on Facebook, watch a campaign commercial on TV, or receive a flyer in the mail. Two of the most powerful methods for using technology as a political tool are digital media and data collection.

For example, Barack Obama's path to the U.S. presidency relied heavily on an unprecedented effort to gather data about key voters' voting patterns and demographics. Donald Trump's election was likewise shaped by technology; social media, and Facebook in particular, was flooded with posts aimed at convincing voters to choose him. Data collection has always been a crucial aspect of political campaigns, but Obama's election underscored its potential. In the years since, granular data collection covering everything from household income to past voting behavior and internet browsing patterns has become an essential part of any modern political campaign that aims to be successful. The data is used to develop marketing campaigns that promote political messages, to build donor relationships and request further donations, and to identify new voters and help them get to the polls.

Technology also shapes political communication by helping candidates establish authenticity with specific voting blocs. Beto O'Rourke, for instance, built a presidential campaign on an unusually heavy use of social media. Whether he was presenting local cuisine during a campaign stop, sharing a video of himself, or offering his thoughts on the rigors of being on the road, O'Rourke expected his reliance on technology to resonate with digital natives conditioned to the same media.

China, meanwhile, is creating its own type of internet, expanding its network into a so-called "splinternet": an alternative cyberspace that several developing countries are likely to sign up to. At the same time, political organizations and some candidates take strong positions on topics connected to technology, such as data privacy, with many politicians arguing strongly for individuals' right to privacy.

Some events have been shaped directly by technology, such as the January 6th Capitol riot. That event underlined how seriously technology can affect politics when politicians use it to advance their agendas and manipulate public opinion. Governments and organizations have turned technology into a playing field for improving engagement with the public and rallying people in support of their cause. The public in many countries is divided on how technology affects politics in their societies.
An 11-country median of 44 percent says the increasing use of the internet has had a good impact on politics, but 28 percent feel the impact has been bad, and this balance of opinion is most negative in Tunisia, Jordan, and Lebanon. Adults in these countries also feel that access to technology has had a mix of negative and positive effects on their fellow citizens. On the positive side, a median of 78 percent says access to mobile phones, the internet, and social media has made people more informed about current events. Asked about social media's impact on the broader political process, majorities in nine of these 11 countries say the platforms have increased the ability of ordinary citizens to take part in politics.

A more recent example is Twitter, which banned several politicians, most notably Trump. The platform was then bought by Elon Musk, who calls himself a free speech absolutist, though that stance could be put to the test in the Middle East, where critics say authoritarian governments use the platform to track opponents and spread disinformation. Musk has said he would reconsider the decision to remove Trump and could reinstate him.

The impact of technology on global politics is clearest when technology is used and controlled as a weapon. There is a new map of power in the modern world that is no longer defined by geography, by control of territory or oceans, but by control over flows of people, goods, money, and data, and by exploiting the connections technology creates. In this way, every connection between nations, from energy flows to IT standards, becomes a tool of geopolitics. We are witnessing a fragmentation of globalization as countries use state regulations, subsidies, export controls, entity lists, and localization requirements to secure access to essential technologies for themselves while limiting the access of others. Most worrying is the danger that this technological competition escalates out of control and threatens global security, leaving states feeling compelled to act.

The case of Australian national and WikiLeaks founder Julian Assange shows how technology can create a geopolitical flashpoint. After he published large numbers of classified documents, political and diplomatic tensions rose, turning Assange into a public figure: a hero to many people and countries and, to others, a wanted man accused of betraying state secrets, with media coverage shaping public opinion on both sides. Under extreme pressure, he turned himself in to the London authorities but appealed against his extradition to Sweden. After many appeals, and amid heavy media coverage of the accusations against him, Britain decided it would honor the extradition request, prompting Assange to take refuge in the Ecuadorian embassy in London in 2012. The U.S. then won its bid in the British courts for his extradition in December 2021.

People can't engage in politics without engaging in technology, and they can't use technology without engaging in politics. Technologies affect every aspect of our lives, to the point that they have started to shape our choices in political matters, among others. Politics and technology have always been connected, dating as far back as the Industrial Revolution, which gave rise to some of the most seminal issues in American history, such as labor protections. But in recent years, they have become more or less inseparable.
Source: https://insidetelecom.com/how-does-technology-impact-politics/
Researchers have identified three major vulnerabilities in the Linux kernel that have existed for more than a decade. According to cybersecurity firm GRIMM, these bugs could allow attackers to elevate their privileges from basic to root, opening the door to data theft, malware and ransomware distribution, escalation of privilege and DDoS attacks, Bleeping Computer reported. The vulnerabilities (CVE-2021-27365, CVE-2021-27363 and CVE-2021-27364) have since been fixed and patches for the mainline Linux kernel became generally available on March 7. Linux users have been urged to patch their systems immediately. Despite their relative severity, the vulnerabilities aren't particularly easy to exploit, requiring local access to the target device. This means attackers would either need to access the device physically or chain the Linux bugs with other vulnerabilities. Detailing his findings in a blog post, GRIMM researcher Adam Nichols said that the vulnerable scsi_transport_iscsi kernel module is not loaded by default, and explained what that means: "The Linux kernel loads modules either because new hardware is detected or because a kernel function detects that a module is missing," he wrote. "The latter implicit autoload case is more likely to be abused and is easily triggered by an attacker, enabling them to increase the attack surface of the kernel." "On CentOS 8, RHEL 8, and Fedora systems, unprivileged users can automatically load the required modules if the rdma-core package is installed. On Debian and Ubuntu systems, the rdma-core package will only automatically load the two required kernel modules if the RDMA hardware is available. As such, the vulnerability is much more limited in scope." Bleeping Computer further explained that the bugs could be abused to bypass various Linux security features designed to block exploits, including the Kernel Address Space Layout Randomization (KASLR), Supervisor Mode Execution Protection (SMEP), Supervisor Mode Access Prevention (SMAP) and Kernel Page-Table Isolation (KPTI). "The bottom line is that this is still a real problem area for the Linux kernel because of the tension between compatibility and security. Administrators and operators need to understand the risks, their defensive options, and how to apply those options in order to effectively protect their systems," Nichols concluded.
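For administrators who want a quick local check while patching, the short script below lists whether iSCSI-related modules are currently loaded by reading /proc/modules. This is only an illustrative sketch: the module names other than scsi_transport_iscsi are assumptions, and an unloaded module does not make an unpatched kernel safe, since modules can be autoloaded later.

```python
# Quick, illustrative check of currently loaded kernel modules on a Linux host.
# Not a substitute for applying the kernel patches.
MODULES_OF_INTEREST = {"scsi_transport_iscsi", "libiscsi", "iscsi_tcp"}  # last two assumed

def loaded_modules(path: str = "/proc/modules") -> set[str]:
    with open(path) as fh:
        return {line.split()[0] for line in fh if line.strip()}

if __name__ == "__main__":
    present = MODULES_OF_INTEREST & loaded_modules()
    if present:
        print("Loaded iSCSI-related modules:", ", ".join(sorted(present)))
    else:
        print("No iSCSI-related modules currently loaded (they could still be autoloaded).")
```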
Source: https://www.itproportal.com/news/decade-old-linux-kernel-bugs-are-putting-devices-at-risk/
Public key cryptography is one of the fundamental technologies used for exchanging information on the Internet securely. It's used by Web browsers to create secure connections to Web sites, and by e-mail security gateways and applications to encrypt messages. Its strength lies in the fact that it can be used to exchange encrypted information between two parties that have never communicated together before and have therefore never agreed on a secure way of exchanging messages.

To understand how public key cryptography works, let's consider secure communications in general. One way to send a confidential message to someone is to agree on an obfuscation system in advance, like substituting each letter in the message with the next one in the alphabet. A more sophisticated method would be to use encryption software which uses an encryption algorithm, known as a cipher. The message (known as plaintext) is passed to the algorithm along with a key (a string of characters that you supply) and comes out in encrypted form (known as ciphertext). This unintelligible jumble of characters can only be converted back to the original plaintext by passing the message through the same cipher and supplying the same key. This is known as a symmetric encryption system.

An interesting thing about this system is that its security doesn't rely on the cipher itself being secret. The only thing that needs to be kept secret is the key. (In fact you could argue that the more widely known and understood a cipher is, the more you can trust it to be effective: proprietary algorithms that aren't open to public inspection by independent experts could have secret "backdoors" built in that allow anyone in the know to decrypt messages without the key.)

One problem with symmetric systems is that to send someone a message securely you have to be able to give them the secret key first without anyone else seeing it. Why is that a problem? Imagine a situation in which you were traveling abroad and had to e-mail some valuable corporate information back to a colleague without the authorities in the country you are in getting their hands on it. If you hadn't already agreed on a key before you went traveling then you'd be stuck: you couldn't send an encrypted message without first supplying a key, and you'd have no way of e-mailing a key securely. Of course you could make a phone call to tell your colleague the key you intend to use, but what if the conversation is overheard or the phone line is tapped?

How Public Key Cryptography Works

The solution is to use an ingenious cryptographic system called public key cryptography (PKC). The fundamental part of PKC is that the encryption key is split into two separate keys; let's call them key A and key B. If you encrypt some plaintext with key A, you can't decrypt the resulting ciphertext with key A to get back to your original plaintext. To decrypt ciphertext produced using key A, you need to use key B. In fact (and this turns out to be very useful) the reverse is also true: if you encrypt some plaintext with key B, you can't decrypt it again with that key. You can only decrypt it with key A. If you encrypt a message with one key in the key pair, you can only decrypt it with the other one. So if you want to be able to receive encrypted messages from anyone who wants to contact you, you first need to generate a key pair (using suitable PKC software). One of these you designate as your private key, which you keep secret.
But here's the clever bit: the other key you designate as your public key, and this doesn't have to be kept secret. In fact the reverse is true: it should be distributed as widely as possible so that anyone who wants it can easily get it.

To send that message to a colleague now, all you need is their public key. There are a number of ways you might get hold of it, which we will look at in a future article. The important thing is that this public key doesn't have to be kept secret, so even if you called your colleague and the phone line was being tapped it wouldn't matter. Anyone overhearing the conversation and writing down the public key couldn't use it to decrypt the message that you encrypt with it.

Now remember how we mentioned earlier that your private key can also be used to encrypt a message that can only be decrypted using your public key. You may well ask what would be the point of encrypting a message if the key needed to decrypt it is publicly available. The answer is quite surprising. Let's imagine you receive a message from your colleague, and you believe that it is encrypted with his private key. If you use their public key to decrypt the message successfully then that means that the message must indeed have been encrypted using your colleague's private key (which only your colleague has access to). No other key could have been used to encrypt the message. So encrypting a message with a private key acts as a digital signature: if you can decrypt a message with John's public key, it must have been encrypted using John's private key, so it must have been written by John.

Using double encryption, it's possible to send an encrypted, digitally signed message to anyone who has made their public key available. Here's how: imagine you want to send a message to your colleague Bob at head office. First you write your message (the plaintext) and encrypt it with your private key to produce the ciphertext, a message which is effectively digitally signed as coming from you and no one else. You then encrypt this ciphertext a second time using Bob's public key. Finally, you e-mail the resulting gobbledegook to Bob. When Bob receives this message he decrypts it using his private key to get the ciphertext message that you encrypted with your private key. Bob then decrypts this using your public key. If he gets a message (rather than gobbledegook) he knows that the message definitely came from you (because otherwise he couldn't have decrypted it with your public key) and he knows that no one else could have read the message, because no one else has his private key.

PKC Has Its Limits

Are there any limitations to the PKC approach? The answer to this question is yes. Firstly, any encrypted message is only as strong as the cipher that is used to encrypt it. If a weakness is discovered in the cipher such that you no longer need a key to decrypt the message, or it becomes possible to work out the key (directly or indirectly) from the contents of the ciphertext, then clearly the system is not secure. Another caveat is that any key-based encryption system is susceptible to a brute-force attack: methodically trying every possible key until the correct one is found. Modern encryption techniques rely on the fact that if there is a sufficiently large keyspace (meaning there are a sufficiently large number of possible keys), it is likely to take hundreds of millions of years to find a key by brute force using the computers that are currently available.
But as computers become more powerful, the length of the keys typically used may need to be increased to ensure that the chances of successfully brute-forcing a key remain tiny. It's important to remember that any encrypted message is never completely safe from a brute-force attack: someone might guess the correct key with their very first guess. It's just that with a strong cipher and a long key the probability of that happening, or of them hitting upon the correct key within a thousand years, is vanishingly small.

The final problem that's worth mentioning is the problem of key management: how do you get hold of someone's public key, and how can you be sure that it really belongs to the person you think it does? If you send a message to Bob using the public key which you think belongs to Bob but actually belongs to Carol, then Bob won't be able to read it. More worryingly, if Carol manages to get her hands on the message she will be able to read it, even though you intended it for Bob's eyes only.

But despite these potential problems, it's fair to say that PKC has revolutionized the way that secure communications are carried out. In the next piece in this series, we'll be looking at key management and how PKC is used in the real world to provide commercial and open-source secure e-mail systems.
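To make the Bob-and-colleague flow above concrete, here is a minimal, hedged sketch using the pyca/cryptography package (assumed to be installed). Modern libraries expose the "encrypt with your private key" idea as an explicit sign/verify operation, and real systems normally encrypt the message body with a symmetric key and use RSA only for the key exchange and signature, but the steps below mirror the article's simplified description.

```python
# Sketch: sign with the sender's private key, encrypt with the recipient's
# public key, then decrypt and verify on the other side.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Each party generates a key pair; the public halves can be shared openly.
sender_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"Quarterly figures attached"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

signature = sender_priv.sign(message, pss, hashes.SHA256())   # proves authorship
ciphertext = bob_priv.public_key().encrypt(message, oaep)     # only Bob can read it

plaintext = bob_priv.decrypt(ciphertext, oaep)                # Bob decrypts...
# ...and verifies; verify() raises InvalidSignature if anything was tampered with.
sender_priv.public_key().verify(signature, plaintext, pss, hashes.SHA256())
print(plaintext.decode())
```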
Source: https://www.enterprisenetworkingplanet.com/standards-protocols/public-key-crypto-for-enterprise-users/
A Purdue University data science and machine learning innovator wants to help organizations and users get the most for their money when it comes to cloud-based databases. The same technology may also help self-driving vehicles operate more safely on the road, where latency is the primary concern. Somali Chaterji, a Purdue assistant professor of agricultural and biological engineering who directs the Innovatory for Cells and Neural Machines [ICAN], and her team created a technology called OPTIMUSCLOUD.

A benefit for both cloud vendors and customers

The system is designed to help achieve cost and performance efficiency for cloud-hosted databases, rightsizing resources to benefit both the cloud vendors, who do not have to aggressively over-provision their cloud-hosted servers for fail-safe operations, and the clients, because the data center savings can be passed on to them. "It also may help researchers who are crunching their research data on remote data centers, compounded by the remote working conditions during the pandemic, where throughput is the priority," Chaterji said. "This technology originated from a desire to increase the throughput of data pipelines to crunch microbiome or metagenomics data."

This technology works with the three major cloud database providers: Amazon's AWS, Google Cloud, and Microsoft Azure. Chaterji said it would also work with other, more specialized cloud providers such as Digital Ocean and FloydHub with some engineering effort. It is benchmarked on Amazon's AWS cloud computing services with the NoSQL technologies Apache Cassandra and Redis. "Let's help you get the most bang for your buck by optimizing how you use databases, whether on-premise or cloud-hosted," Chaterji said. "It is no longer just about computational heavy lifting, but about efficient computation where you use what you need and pay for what you use."

Handling long-running, dynamic workloads

Chaterji said current cloud technologies that use automated decision making often only work for short, repetitive tasks and workloads. Her team created an optimal configuration approach to handle long-running, dynamic workloads, whether they come from the ubiquitous sensor networks in connected farms, from high-performance computing workloads in scientific applications, or from the current COVID-19 simulations being run in different parts of the world in the rush to find a cure for the virus. "Our right-sizing approach is increasingly important with the myriad applications running on the cloud with the diversity of the data and the algorithms required to draw insights from the data and the consequent need to have heterogeneous servers that drastically vary in costs to analyze the data flows," Chaterji said. "The prices for on-demand instances on Amazon EC2 vary by more than a factor of five-thousand, depending on the virtual memory instance type you use."

Chaterji said OPTIMUSCLOUD has numerous applications for databases used in self-driving vehicles (where latency is a priority), healthcare repositories (where throughput is a priority), and IoT infrastructures in farms or factories.

OPTIMUSCLOUD: Using machine learning and data science principles

OPTIMUSCLOUD is software that runs alongside the database server. It uses machine learning and data science principles to develop algorithms that jointly optimize the virtual machine selection and the database management system options.
“Also, in these strange times when both traditionally compute-intensive laboratories such as ours and wet labs are relying on compute storage, such as to run simulations on the spread of COVID-19, throughput of these cloud-hosted VMs is critical and even a slight improvement in utilization can result in huge gains,” Chaterji said. “Consider that currently, even the best data centers run at lower than 50% utilization and so the costs that are passed down to end-users are hugely inflated.” “Our system takes a look at the hundreds of options available and determines the best one normalized by the dollar cost,” Chaterji said. “When it comes to cloud databases and computations, you don’t want to buy the whole car when you only need a tire, especially now when every lab needs a tire to cruise.”
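As a toy illustration of "the best one normalized by the dollar cost," the sketch below ranks a handful of candidate configurations by throughput per dollar. The prices and throughput figures are invented for the example; OPTIMUSCLOUD itself learns from real benchmark data across many VM and DBMS parameters rather than comparing a fixed list.

```python
# Toy example: pick the configuration with the best throughput per dollar.
candidates = [
    # (configuration label, hourly price in USD, measured ops/sec) - made-up numbers
    ("small VM, default Cassandra settings", 0.10, 4_000),
    ("medium VM, tuned compaction",          0.38, 18_000),
    ("large VM, tuned compaction + cache",   1.50, 41_000),
]

def ops_per_dollar(entry):
    _label, price, throughput = entry
    return throughput / price

for label, price, throughput in candidates:
    print(f"{label:40s} {throughput / price:12.0f} ops per dollar-hour")

best = max(candidates, key=ops_per_dollar)
print("Pick:", best[0])
```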
Source: https://www.helpnetsecurity.com/2020/06/08/optimuscloud-cloud-hosted-databases/
A good web browser makes getting online and exploring the Internet quick, easy and safe. Browsers can only consistently protect you if they're regularly updated, however. It can be confusing to figure out which version of the browser you're running, and confusing to update it too. For example, last week Google recommended that everyone update their Chrome browser because of a dangerous zero-day exploit. Go to Updatemybrowser.org to see what version of your web browser you're running, and update it today. It's not just good to update your web browser; it's necessary. Updates keep your browser running quickly and safely by keeping up with bugs and possible hacks. If you need further convincing, read on to learn more about Update my Browser and how it can help you today.

See your browser version—and update it

Just by going to Updatemybrowser.org, you can see what browser you're using, if you're unsure, and which version. Update my Browser lists Chrome, Safari, Edge, Firefox, Internet Explorer and Opera on its home page, and can correctly identify your browser from within that list. If you access Update my Browser from a browser not on the list, like Vivaldi, the site will see you as using one of the browsers it knows; in Vivaldi's case, it comes up as Chrome. Update my Browser isn't a perfect site, then, but since a majority of people use one of the six browsers listed, it's still incredibly useful for its second feature. Once it identifies which browser you are using, Update my Browser also reads which version of the browser you are running. Through a simple bit of code, the site sees which update number you are on, and simultaneously checks with your browser's developers whether there's a more recent version you can download. And it makes it easy for you to update your browser right then and there by giving you an "Update now" button to get the latest version. Once it takes you to a developer's website, just follow the directions for the download on that page, and then launch the installer from your downloads folder, or wherever you saved the download. After that, your browser will be up to date.

Why you should update your web browser

But of course, convenient as Update my Browser makes it, you might still be wondering why it's important to update your web browser. It comes down to two main reasons: updating your browser improves your Internet speed, and it makes being on the Internet a lot safer. We've talked before about how browser extensions can speed up your connection, download and upload times. Updating your browser does that too. Browser developers aim to improve performance with every update they provide so their browser runs more efficiently and with more speed. This also means updates can result in better functionality, and new or better controls for your web browsing experience. So updating gets your browser to run faster, and to work better too. One of the major parts that works better when you update your browser is the browser's security functions. Updating to the latest version of your web browser guarantees you have all known openings for hackers closed and walled off, as well as known bugs addressed and repaired. There's a lot you can personally do to protect yourself online regardless of your browser version, but updating your browser can keep you that much safer. Any advancements in coding or tech can get implemented in a browser update, after all, and when your computer is protected, you're protected too.
A really important safety consideration is that the older your browser version is, the less likely it is that the developers are still actively keeping it safe. They're busy making sure their latest browser updates are functioning normally and are keeping users relatively safe as they surf the web. Your older browser won't be left behind right away, but the longer you wait to update, the more at risk you become. Updatemybrowser.org is also an open source site, so you can easily check out the code that makes this functionality possible, and even include it on your own website, so other people can see if they need to update.
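The version check the site performs is conceptually simple: the browser reports its name and version (for example via its User-Agent string), and the site compares that against the newest release published by the vendor. The sketch below shows the idea in simplified form; the User-Agent string and "latest version" numbers are illustrative only, and a real service would fetch current release data from the browser vendors.

```python
# Simplified sketch of an "is this browser up to date?" check.
import re

LATEST_MAJOR = {"Chrome": 107, "Firefox": 106}  # hypothetical values

def parse_browser(user_agent: str):
    match = re.search(r"(Chrome|Firefox)/(\d+)", user_agent)
    return (match.group(1), int(match.group(2))) if match else (None, None)

ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/105.0.0.0 Safari/537.36"
name, major = parse_browser(ua)
if name and major < LATEST_MAJOR[name]:
    print(f"{name} {major} looks out of date; the latest major version is {LATEST_MAJOR[name]}.")
else:
    print("Browser appears up to date (or was not recognized).")
```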
Source: https://www.komando.com/lifestyle-reviews/find-out-if-your-browser-is-up-to-date-and-how-to-update-it/552121/
Google Demonstrates Sycamore Quantum Computer Can Detect & Fix Computational Errors (NewScientist) Google has shown that its Sycamore quantum computer can detect and fix computational errors, an essential step for large-scale quantum computing, but its current system generates more errors than it solves. Error-correction is a standard feature for ordinary, or classical, computers, which store data using bits with two possible states: 0 and 1. Transmitting data with extra “parity bits” that warn if a 0 has flipped to 1, or vice versa, means such errors can be found and fixed. Julian Kelly at Google AI Quantum and his colleagues have demonstrated the concept on Google’s Sycamore quantum computer, with logical qubits ranging in size from five to 21 physical qubits, and found that logical qubit error rates dropped exponentially for each additional physical qubit. The team was able to make careful measurements of the extra qubits that didn’t collapse their state but, when taken collectively, still gave enough information to deduce whether errors had occurred. Kelly says that this means it is possible to create practical, reliable quantum computers in the future. “This is basically our first half step along the path to demonstrate that,” he says. “A viable way of getting to really large-scale, error-tolerant computers. It’s sort of a look ahead for the devices that we want to make in the future.” Peter Knight at Imperial College London says Google’s research is progress towards something essential for future quantum computers. “If we couldn’t do this we’re not gonna have a large scale machine,” he says. “I applaud the fact they’ve done it, simply because without this, without this advance, you will still have uncertainty about whether the roadmap towards fault tolerance was feasible. They removed those doubts.”
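The classical parity idea mentioned above can be made concrete with a tiny repetition-code sketch: copy a bit several times and take a majority vote, so a single flipped copy is detected and corrected. Quantum error correction is far subtler, since qubits cannot simply be copied or read out without disturbing them, but the goal of catching and fixing flips is the same.

```python
# Classical repetition code: detect and correct a single bit flip by majority vote.
def encode(bit: int, copies: int = 3) -> list[int]:
    return [bit] * copies

def decode(codeword: list[int]) -> int:
    return 1 if sum(codeword) * 2 > len(codeword) else 0

word = encode(1)
word[0] ^= 1                # simulate one bit-flip error in transmission
assert decode(word) == 1    # the majority vote still recovers the original bit
print("recovered:", decode(word))
```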
Source: https://www.insidequantumtechnology.com/news-archive/google-demonstrates-sycamore-quantum-computer-can-detect-fix-computational-errors/
The internet of things (IoT) is an environment in which "things" – objects, animals or people – are given unique identifiers on the internet and are able to transfer data over a network without the need for human-to-human or human-to-computer interaction. The IoT has evolved from the convergence of wireless technologies, micro-electromechanical systems (MEMS) and the internet. These are the key components of the IoT:
- Data collection: At the core of the IoT are sensors and actuators that collect, transmit, store and act on data at the source. These devices range in size and capability. Some have minimal operating systems (OS). Others have robust embedded OS, including Microsoft Windows and Google Android.
- Connectivity: The IoT cannot exist without the interconnection of devices and sensors. Bluetooth, near-field communication (NFC), Wi-Fi and cellular are familiar technologies for enabling connectivity. On the horizon is NB-IoT, a narrowband IoT protocol based on current cellular technology. It will support quality of service (QoS), as well as the critical success factor for any IoT implementation: a low-power wide area network (LPWAN). NB-IoT will also offer security – something that many platforms and protocols for connectivity lack.
- People and processes: As the number of connected devices grows, so, too, will the need for new methods of managing, interpreting and acting on the massive volumes of data being generated and collected by those devices. The type and amount of data being collected holds potentially powerful insights. The value proposition behind the IoT is based on the idea that action will be taken based on this data. In some cases, the action may be immediate; in others, data may accumulate over time to provide trending, metrics across populations or predictive analytics. This is where people, process and risk management come into play. Processes must be designed to ensure data-driven actions are well thought out, consistent, and aligned with strategic objectives and risk management protocols.

The real promise of the IoT lies in this third component. The integration of people and processes in the IoT is required to help the internet of everything, or IoE, evolve. The IoT is evolving rapidly, with a wide array of "smart" systems, mobile apps, personal communication devices and other platforms already networked together. Research firm IDC projects that there will be 30 billion connected things by 2020. And to paraphrase Forbes in defining the IoT, if something can be connected to the internet, it's only a matter of time before it will be.

But the IoT isn't just about connecting and gathering data from things like wireless smart devices and systems. The IoT is a critical technology transition that is essential to the development of a much bigger and deeply interconnected network and to advancing and supporting digital business. In an increasingly digital world, senior executives and boards of directors need to be keen observers of all technological changes that could potentially impact the business and its risk profile. The IoT is exactly that type of disruptive change. Management and boards therefore must understand how to recognize the signs of IoT change and any related implications for the business model or strategic objectives of the organization. As the IoT expands and the world becomes more interconnected – and devices in the IoT collect more and richer data from objects, machines and people – organizations across industries will face new opportunities and risks.
Privacy issues, hacking and other cybercrime, and the potential for catastrophic business failure due to heavy reliance on the internet are examples of risks that businesses will need to monitor closely in the IoT landscape.

What Opportunities Does the IoT Present for Businesses?

In addition to understanding key IoT-related risks, management and boards must recognize the opportunities the IoT presents to the business, remembering that failure to take advantage of the IoT opportunity is a risk in and of itself. These opportunities may be unexpected, and previously unimagined. The IoT can bring positive disruption and innovation to even traditional, non-digital industries. Here are some examples of how IoT has been applied in various industries:
- Consumer technology: Amazon Dash, the Wi-Fi-connected device that lets users reorder their favorite products through Amazon with the press of a button, was not only adopted literally overnight, but was also soon hacked by users to enable it to do other things, such as order a pizza or call an Uber.
- Electricity and utilities: Smart grid technology enables distribution intelligence and provides a two-way opportunity to send electricity back to the grid during peak usage periods.
- Oil and gas: By becoming "digital technology companies," oil and gas companies can further improve rig uptime and oil recovery rates, reduce oil spillage, boost employee productivity, shrink costs, and more.
- Insurance: Environmental sensors are being used to detect temperature, smoke, toxic fumes, mold, earthquake motion and more in workplaces and other buildings and facilities.
- Automotive: Autonomous cars can help reduce traffic and increase road safety. Road sensors can alert drivers of sensor-equipped cars to rain, frost and ice. Some road sensors can also measure the thickness of ice, analyze the makeup of chemicals on the road used for deicing, and then report to departments of transportation so they can improve their application of those chemicals.
- Medical: Patient care is an obvious application for IoT technologies (including appointment scheduling and monitoring conditions). Medical device downtime can also be reduced through remote monitoring and support. IoT technology helps hospitals optimize the supply chain and reduce risk as well.

The Risks of the IoT

Considering the potential opportunities the IoT presents, perhaps the most significant IoT risk for businesses is not moving fast enough (or at all) to develop and leverage new IoT technologies and applications. Nevertheless, to succeed in the IoT world, organizations must also be aware of and closely monitor their risk exposure in areas such as privacy, interruption of service and distributed denial-of-service attacks. The Open Web Application Security Project (OWASP) helps manufacturers, developers and consumers better understand IoT security issues so that they can make better security decisions when building, deploying or assessing IoT technology. Here is OWASP's list of the top 10 IoT risks, which organizations can use to assess their specific IoT risks:
- Insecure web interface
- Insufficient authentication/authorization
- Insecure network services
- Lack of transport encryption/integrity verification
- Privacy concerns
- Insecure cloud interface
- Insecure mobile interface
- Insufficient security configurability
- Insecure software/firmware
- Poor physical security

Read more about the internet of things in our whitepaper, The Internet of Things: What Is It and Why Should You Care?
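Several of the applications above share the same underlying pattern: a sensor reading crosses a threshold and triggers a notification or an automated action (alerting a clinician, adjusting a building's climate, reordering stock). The sketch below shows that sense-and-respond loop in its simplest possible form; the metrics, thresholds and notification call are hypothetical placeholders, not part of any product named above.

```python
# Minimal sense-and-respond sketch: readings that cross a threshold trigger an alert.
THRESHOLDS = {"room_temperature_c": 27.0, "patient_heart_rate_bpm": 120.0}

def notify(metric: str, value: float) -> None:
    # In a real deployment this would page staff, open a ticket, or call an API.
    print(f"ALERT: {metric} = {value} exceeds threshold {THRESHOLDS[metric]}")

def on_reading(metric: str, value: float) -> None:
    if value > THRESHOLDS.get(metric, float("inf")):
        notify(metric, value)

# Simulated readings arriving from two different sensors.
for metric, value in [("room_temperature_c", 24.5),
                      ("room_temperature_c", 28.1),
                      ("patient_heart_rate_bpm", 131.0)]:
    on_reading(metric, value)
```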
Source: https://info.knowledgeleader.com/what-is-the-internet-of-things
Women in low- and middle-income countries (LMICs) are 8 per cent less likely than men to own a mobile phone and 20 per cent less likely to use mobile internet or own a smartphone. While this mobile gender gap is well documented, new econometric analysis by GSMA Intelligence and GSMA Connected Women finds women are less likely than men to own a mobile phone, use mobile internet, or own a smartphone even when other relevant socio-economic and demographic factors are controlled for. As a result, women are prevented from accessing essential services for health, education and finance, especially during the COVID-19 pandemic. To address this issue, it is essential to understand what the underlying drivers of the gap are. Are women less likely to use mobile because of broader gender inequalities in literacy, education, income or employment, or are there other factors at play?

To answer these questions, we have carried out new quantitative analysis (available on request) to better understand the key drivers of mobile ownership, mobile internet use and smartphone ownership, using three years of data (spanning 2017 to 2019) across 31 LMICs from GSMA Intelligence's face-to-face consumer surveys. The results show how key demographics influence uptake of mobile handsets, mobile internet and smartphones, building on previous research by various organisations on how different factors impact the mobile gender gap. Our findings show that:
- Individuals in rural areas are less likely to own a mobile phone than urban populations, and the effect is even greater for mobile internet and smartphone adoption.
- Those with lower incomes and those not working are less likely to own a mobile, use mobile internet, and own a smartphone.
- Individuals who have only completed primary education are less likely to own a mobile, use mobile internet and own a smartphone than those with a degree or above; those who have completed secondary education are more likely than those with only primary education, but less likely than those with a degree. Similarly, those with low levels of literacy are less likely to access these three types of mobile technology than those with good literacy skills.
- The probability of adopting the three types of mobile technology generally declines with age.

Women are less likely than men to own a mobile phone, use mobile internet or own a smartphone, even when other relevant socio-economic and demographic factors are controlled for.

Because of broader gender inequalities in literacy, education, income and employment, the above results show women are less likely to adopt and use mobile technology than men. However, addressing these inequalities will not close the gender gap completely. Even if women in LMICs had the same levels of education, income, literacy and employment as men, our analysis finds there would still be a gap in the adoption and use of mobile technology. In other words, this additional negative effect can be attributed solely to gender. Even when all these other relevant socio-economic and demographic factors are controlled for, women in LMICs are 5 percentage points less likely than men to own a mobile phone, 6 percentage points less likely to use mobile internet, and 4 percentage points less likely to own a smartphone. This gender effect could be attributed to mechanisms which are hard to measure, such as discrimination and social norms.
We found this gender effect is worse for women living in rural areas, those who are unemployed, and those with lower levels of literacy. By region, this gender effect was particularly strong in Africa and Asia, whereas in Latin America women are just as likely as men to own a mobile, use mobile internet or own a smartphone once other relevant factors are controlled for. This suggests certain types of women face different degrees of the gender effect which prevents them from becoming digitally included. The results of this study have important implications. They suggest even if broader gender inequalities in socio-economic outcomes are addressed, such as equalising access to income and education among men and women (which in itself is unlikely to occur in the short-term), there is still likely to be a persistent mobile gender gap. These less visible drivers, which could for example relate to discrimination or social norms in certain countries and which certain segments of women appear to experience more acutely, need to be better understood and addressed if the gender gap is to ever be closed. Caroline Butler – economist, GSMA Intelligence and Matt Shanahan – insights analyst, GSMA Connected Women The editorial views expressed in this article are solely those of the author and will not necessarily reflect the views of the GSMA, its Members or Associate Members.
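The GSMA has not published its model as code, but the kind of estimation described above (a gender effect that persists after controls) is commonly done with a logistic regression. The sketch below is purely illustrative: the survey file and column names are hypothetical, and statsmodels is assumed to be installed.

```python
# Illustrative sketch: estimate the association between gender and mobile
# ownership while controlling for other factors, using logistic regression.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical extract, one row per respondent

model = smf.logit(
    "owns_mobile ~ female + rural + C(income_band) + C(education) + employed + literate + age",
    data=df,
).fit()

# A negative, statistically significant coefficient on `female` would point to
# a gender gap that remains after the listed controls are accounted for.
print(model.summary())
```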
Source: https://www.gsmaintelligence.com/2020/08/
So which is it: Cyber Security, Cyber-Security or Cybersecurity? Is this the next reincarnation of datacentre vs. data center or ciphertext vs. cipher text? The security industry still hasn't made any concerted effort to close on the cyber?security anomaly. And with about 15 million search results each, not even Google is able to raise a leg from either side of the fence.

When spending any length of time researching security, one quickly becomes aware of far too many polarized views within the technical community. These may be relatively trivial opinions, such as whether Mac or PC is more secure, all the way up to endorsing (or not) a sophisticated anti-DDoS solution. It's frustrating to say the least, and borderline embarrassing when the output of one's labour is supposed to be a bulletproof recommendation, an architectural blueprint or a multi-year security roadmap. About the only thing the industry agrees upon is that cyber should go before security.

Many of my colleagues are divided along traditional lines, the Atlantic. American English seems to prefer Cybersecurity while British English (and European colleagues) cling to Cyber Security. I think Google threw in the towel a long time ago because unless specifically told not to, it fudges most of its search results and includes all commonly used variants.

Grammarians may argue, but the Associated Press (@APStylebook), which for all intents and purposes still holds the throne when it comes to news copy style, says it is one word, Cybersecurity: "cyber-, cyberspace is a term popularized by William Gibson in the novel "Neuromancer" to refer to the digital world of computer networks. It has spawned numerous words with cyber- prefixes, but try to avoid most of these coinages. When the combining form is used, follow the general rule for prefixes and do not use a hyphen: cyberattack, cyberbullying, cybercafe, cybersecurity." There are some exceptions to the prefix rule, specifically around proper nouns, such as 'US Cyber Command.' But for the most part, if you are sticking with the leader when it comes to defining news style, you will want to stick with the single-word use.

Gartner acknowledged that there is confusion in the market over how the term should be used, which prompted the firm to publish "Definition: Cybersecurity." In it, analysts Andrew Walls, Earl Perkins and Juergen Weiss wrote that: "Use of the term 'cybersecurity' as a synonym for information security or IT security confuses customers and security practitioners, and obscures critical differences between these disciplines." To help set the record straight, the team defined the term: "Cybersecurity encompasses a broad range of practices, tools and concepts related closely to those of information and operational technology security. Cybersecurity is distinctive in its inclusion of the offensive use of information technology to attack adversaries." Additionally, Gartner advised: "Security leaders should use the term 'cybersecurity' to designate only security practices related to the combination of offensive and defensive actions involving or relying upon information technology and/or operational technology environments and systems."

Hoping to shed some light on this problem and advocate a single solution, one blogger polled some of the top experts on neologisms, the creation of new words, to see whether there was consensus on where cyber is heading. One response that particularly resounded with me was that by Suzanne Kemmer, Associate Professor of Linguistics at Rice University.
She said: "From the standpoint of the usual lexical conventions, cybersecurity is better, because 'cyber' is not a free-standing word but instead what linguists call a bound morpheme – a combining form used to form new words. It is of Classical Greek origin, like many of our scientific and technical vocabulary elements – and the usual pattern for such borrowings is to combine them with other elements into one word. Bio, neo, photo are all parallel examples – when made into new compounds they are written together with the element following: not bio informatics but bioinformatics, etc. Sometimes a group of specialists will make their own convention, but the language at large typically doesn't follow it because there are so many instances of the more general pattern. It looks like that has happened in the technical community in this case. They probably don't know the general lexical patterns of English and just have made their own specialists' convention. I predict that for this word the general (one-word) pattern will win out in the language at large."

Whatever your decision may be, stick to it and be consistent throughout your writing. And if you're tempted to throw every conceivable variant into a body of work with the aim of improving your search ranking, please don't. Since we already know that the likes of Google treat these variants equally, using them will only confuse the reader.

And so, while most of us mere mortals prepare for a long winter of similarly pointless deliberation, at least I'm no longer in two minds about how to write cybersecurity. I've decided to go with the single word because, as Suzanne put it, cybersecurity will win out in the end. Having said this, I also like the look of CyberSecurity, with a capitalized S: easier on the eye, I think.
Source: https://drshem.com/2015/10/12/cyberconfusion-cyber-security-cyber-security-or-cybersecurity/
The following is a guest post from i2Coalition Board of Directors member and consultant to member Afilias, Melinda Clem. In Washington, DC this week, information policy leaders shared their thoughts on Internet fragmentation: a move from a universally connected Internet to one divided into disparate networks via regulations, technology or unintended consequences. This was not a theoretical discussion, but one grounded in the realities of connectivity issues, security, logical barriers and the voluntary agreements that keep the Internet working. A discussion like this can proceed in a variety of directions, many of which focus on the negatives: describing grave threats, over-generalizing about security, or blaming dissenting voices. Tuesday's panel was able to talk in measured terms about threats and highlight various efforts to protect against fragmentation. A few concepts in particular are worth highlighting.

We should remember the goal: global connectivity and participation. The clearest example of fragmentation of the Internet was articulated by Kathryn Brown, Internet Society CEO: only half the world is on the Internet. This access barrier poses the clearest dislocation threat – when 4 billion people are physically disconnected, we don't have a comprehensive, shared universal system. Initiatives by the Internet Society and the State Department's Global Connect Program aim to facilitate the requisite infrastructure development and education to bring the entire world, regardless of location, online.

Global participation is the second factor in remedying physical fragmentation of the Internet. It is critical that the constituents of the Internet – users, naming and addressing operators, policy makers, technical standards developers, governments and communication service providers – have a voice. This global, collaborative process, known as the multistakeholder approach, is the mechanism for sharing information and expanding reach and access to the Internet in a responsible manner. This system is globally endorsed by the United Nations, but it is voluntary. As participants, it is our responsibility to engage in an active, constructive manner, bring other actors into this open governance model, and strive to increase physical connectivity. Responsible participation will reinforce the multistakeholder model by positive example.

An open Internet does not mean all information is accessible. An important clarification was made by Ambassador Daniel Sepulveda: privacy is not the enemy of the open Internet. While this may seem obvious at the outset, if we think about the broader issues around espionage, data localization and balancing the preservation of identity, social custom and anonymity, we start to see the challenges. It is important to remember, as Brown said, "the Internet is both local and global." When defining policy, it is important to note that policy is not a social prescription on how to use Internet resources, but is generally focused on providing secure connectivity. We need to educate policymakers on what Jeremy West of the OECD referred to as the "unintended consequences" of restricting access, by showing the numerous educational resources available online and how limiting access can close off opportunities for social growth. As we develop new policies, business models and innovative technologies, it is important to prioritize local control of data.
We must provide means for users to use technologies, educate themselves, and gather and share information while protecting their anonymity as desired. This user control is not accomplished via discrete, proprietary networks with closed protocols; it is better achieved by building end-user controls into open protocols and standards. Fragmentation of the Internet is a threat, and it won't be solved in a 90-minute discussion, or in the next 90 days. But if we are all responsible actors and do our part to increase connectivity and participation, and if we innovate using open standards, we will realize the full goals of a globally connected and open Internet.
Source: https://i2coalition.com/fighting-internet-fragmentation/
What is a Concept Model? A concept model organizes the business vocabulary needed to communicate consistently and thoroughly about the know-how of a problem domain. A concept model starts with a glossary of business terms and definitions. It puts a very high premium on high-quality, design-independent definitions, free of data or implementation biases. It also emphasizes rich vocabulary. A concept model is always about identifying the correct choice of terms to use in communications, including statements of business rules and requirements, especially where high precision and subtle distinctions need to be made. The core concepts of a business problem domain are typically quite stable over time.

Concept models can be especially effective where:
- The organization seeks to organize, retain, build on, manage, and communicate core knowledge or know-how.
- The project or initiative needs to capture 100s or 1,000s of business rules.
- There is significant push-back from business stakeholders about the perceived technical nature of data models, class diagrams, or data element nomenclature and definition.
- Outside-the-box solutions are sought when reengineering business processes or other aspects of business capability.
- The organization faces regulatory or compliance challenges.

Definition of Concept Model: a model that develops the meaning of core concepts for a problem domain, defines their collective structure, and specifies the appropriate vocabulary needed to communicate about it consistently.

Examples of basic noun concepts:
- For BACCM these basic noun concepts are: need, stakeholder, value, change, context, and solution.
- In finance, basic noun concepts might include financial institution, real-estate property, party, mortgage application, lien, asset, loan, etc.

Examples of wordings for verb concepts:
- In BACCM some basic wordings for verb concepts include: Value is measured relative to Context, Change is made to implement Solution, Stakeholder has Need.
- In a financial business, some basic wordings for verb concepts include: Lien is held against Real Estate Property, Party requests Loan, Asset is included in Mortgage Application.

Other kinds of connections:
- Categorizations – e.g., Person and Organization are two categories of Party.
- Classifications – e.g., 'Toronto Dominion Bank' is an instance of Financial Institution.
- Partitive (Whole-Part) Connections – e.g., Dwelling and Land are two Parts of a Real Estate Property.
- Roles – e.g., Applicant is the role that Party plays in the verb concept Party requests Loan.

Strengths: a concept model
- Provides a business-friendly way to communicate with stakeholders about precise meanings and subtle distinctions.
- Is independent of data design biases and the often limited business vocabulary coverage of data models.
- Proves highly useful for white-collar, knowledge-rich, decision-laden business processes.
- Helps ensure that large numbers of business rules and complex decision tables are free of ambiguity and fit together cohesively.

Limitations: a concept model
- May set expectations too high about how much integration based on business semantics can be achieved on relatively short notice.
- Requires a specialized skill set based on the ability to think abstractly and non-procedurally about know-how and knowledge.
- Involves a knowledge-and-rule focus that may be foreign to stakeholders.
- Requires tooling to actively support real-time use of standard business terminology in writing business rules, requirements, and other forms of business communication.
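One lightweight way to see the structure a concept model captures is to write a few noun concepts and verb-concept wordings down as data. The sketch below is purely illustrative: the terms and definitions are toy examples in the spirit of the finance vocabulary above, not a standard representation.

```python
# Toy representation of concept-model vocabulary: noun concepts with
# definitions, plus verb concepts expressed as structured wordings.
from dataclasses import dataclass

@dataclass
class NounConcept:
    term: str
    definition: str

@dataclass
class VerbConcept:
    subject: str
    verb: str
    obj: str  # wording reads "<subject> <verb> <obj>"

glossary = {
    "Party": NounConcept("Party", "A person or organization of interest to the business."),
    "Loan": NounConcept("Loan", "An amount of money lent to a party under agreed terms."),
}

wordings = [
    VerbConcept("Party", "requests", "Loan"),
    VerbConcept("Lien", "is held against", "Real Estate Property"),
]

for w in wordings:
    print(f"{w.subject} {w.verb} {w.obj}")
```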
<urn:uuid:57f8e74f-acb1-4f00-86d2-47d69d663f68>
CC-MAIN-2022-40
https://www.brsolutions.com/what-is-a-concept-model/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00298.warc.gz
en
0.908426
692
2.828125
3
Industrial Internet of Things (IIoT) What is the industrial Internet of Things (IIOT)? The IIoT consists of internet-connected machinery and the advanced analytics platforms that process the data they produce. IIoT devices range from tiny environmental sensors to complex industrial robots. While the word “industrial” may call to mind warehouses, shipyards, and factory floors, IIoT technologies hold a lot of promise for a diverse range of industries, including agriculture, healthcare, financial services, retail, and advertising. What's the difference between IOT and IIOT? The Industrial Internet of Things is a subcategory of the Internet of Things, which also includes consumer-facing applications such as wearable devices, smart home technology, and self-driving cars. Sensor-embedded devices, machines, and infrastructure that transmit data via the Internet and are managed by software are the hallmark of both concepts. Why Industrial IOT? For any business that deals with the production and/or transportation of physical goods, IIoT can create game-changing operational efficiencies and present entirely new business models. The following are examples of ways in which IIoT technology could be applied in diverse industries. - Production – This is the industry in which most IIoT technology is currently being implemented. IIoT-enabled machines can self-monitor and predict potential problems, meaning less downtime and greater overall efficiency. - Supply chain – With sensor-managed inventory, IIoT technology could take care of ordering supplies just before they go out of stock. This decreases the amount of waste produced while keeping necessary goods in stock and frees up employees to focus on other tasks. - Building management – IIoT technology could make building management simpler and more secure. With sensor-driven climate control, the guesswork and frustration involved in manually changing a building’s climate will be eliminated. Additionally, devices that monitor entry points in the building and respond to potential threats quickly will increase the building’s security. - Healthcare – With devices that monitor patients remotely and notify healthcare providers as soon as patients’ statuses change, IIoT could cause healthcare to become more precise and responsive. Eventually, AI may even be able to take over patients’ diagnoses, meaning doctors are able to treat them sooner and more effectively. - Retail – IIoT technology has the potential to make quick, intelligent marketing decisions for individual stores. With storefronts that automatically update based on consumer interest and the ability to put together smart promotions, retail outlets that implement IIoT technology could gain a significant advantage over their competitors. IIOT technologies and concept How are businesses taking advantage of the IIoT right now? Here are a few examples of current and upcoming IIoT technologies and concepts: - Digital twins – The practice of creating a computer model of an object such as a machine or a human organ or a process like weather. By studying the behavior of the twin, it is possible to understand and predict the behavior of the real-world counterpart and address problems before they occur. - Electronic logging device (ELD) – Onboard sensors that monitor speed, driving time, and how often individual drivers use their brakes, helping to conserve fuel, improve driver safety, and reduce idle resources. 
If the driver makes a dangerous maneuver or is at the wheel for too long, the driver is alerted and the dispatcher is notified. This technology can replace the paper logs that drivers were once required to fill out every day. - Intelligent Edge – The place at which data is generated, analyzed, interpreted, and addressed. Using the intelligent edge means analysis can be conducted more quickly and the likelihood that the data will be intercepted or otherwise breached is significantly decreased. - Predictive maintenance – A system in which sensors on a machine or component collect and transmit data, which is then analyzed and stored in a database. This database then provides points of comparison for events as they occur. The system eliminates unnecessary maintenance and increases the likelihood of avoiding failure. - Radio-frequency identification (RFID) – A system that involves tags and readers, like a smarter version of barcode technology. Readers identify RFID tags using radio waves, meaning the tags can be read by multiple readers at once and over a longer distance than traditional UPCs. RFID tags make it possible to easily track and monitor the items to which they are attached. HPE Industrial IoT Products and Services HPE's hybrid cloud and edge computing solutions combine to create powerful IIoT solutions. Process data on site, in real time, and let data flow seamlessly between intelligent devices and your private and public clouds. Edgeline IoT servers: Explore servers designed for edge computing and the Industrial Internet of Things. Mobile and IoT solutions: Collect and analyze data, then take action on insights, at the Intelligent Edge. Hybrid cloud solutions: Power your IIoT with the right hybrid cloud solutions.
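As a rough sketch of the predictive-maintenance idea described above, where sensor readings are compared against stored history to flag likely failures before they occur, the following Python fragment is illustrative only; the baseline value, threshold ratio, and sensor readings are assumptions, not figures from any HPE product.

```python
from statistics import mean

# Historical baseline built from past sensor data (assumed values for illustration).
BASELINE_VIBRATION_MM_S = 2.1   # normal RMS vibration for this machine
ALERT_RATIO = 1.5               # flag when readings exceed 150% of baseline

def needs_maintenance(recent_readings):
    """Compare recent vibration readings against the stored baseline."""
    if not recent_readings:
        return False
    return mean(recent_readings) > BASELINE_VIBRATION_MM_S * ALERT_RATIO

# Example: readings streamed from an IIoT vibration sensor.
readings = [2.0, 2.4, 3.6, 3.9, 4.1]
if needs_maintenance(readings):
    print("Schedule maintenance: vibration trending above baseline.")
else:
    print("Machine operating within normal range.")
```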
<urn:uuid:6662f53c-e965-4022-aec3-6eb6cd8ac707>
CC-MAIN-2022-40
https://www.hpe.com/ae/ar/what-is/industrial-iot.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00298.warc.gz
en
0.926513
1,011
3.140625
3
Some drones are toy airplanes, others are smart flying machines that can perform tasks for people who control them from the ground. As a subset of unmanned aerial vehicles (UAVs), drones can take video, photos and measurements, as well as deliver items from one location to another. Modern drones are integrated with the internet so that the data they collect can be shared in real-time with remote users, and this concept is called ‘internet of drones’. Understanding the Internet of Drones The internet of drones (IoD) combines drones and the internet to empower users in multiple ways. Basically, it means IoT sensors are starting to populate low-altitude airspace. In that sense, IoD is simply IoT in the sky. Drones make it possible for sensor omnipresence to blanket the planet's atmosphere, creating a highly interconnected global village. Applications of Drones in Various Industries Commercial drones are now used by retailers to deliver products faster to consumers. At the start of the 2020s, the fastest drones were able to exceed 160 miles per hour but the legal limit set by the Federal Aviation Administration (FAA) is 100 mph. The average drone can travel at 45 mph. Here are ways drones can improve different industries: - Smart agriculture - Smart drones are perfect tools for farmers to get aerial views of their crops to monitor growth. Using GPS, they can detect specific areas that need more attention. They can take measurements important to agriculture, such as temperature, humidity, sunlight and wind. Farmers are concerned about conserving and maximizing natural resources, but with drones, they will be able to identify and reduce waste. Drones can also be used for agricultural land mapping and spraying agricultural chemicals on crops. - Mining - One of the best ways drones can help the mining industry is by improving the safety of mining operations. Mining is a dangerous job due to the explosives involved. Drone cameras can help map out safety zones for crew members. Since mines typically take up vast amounts of land, drones can help monitor site conditions to avoid manual inspections that require labor, additional costs, and much more time. - Construction - A contractor can use a drone to get aerial photos of construction projects to help make the site safer and more efficient. Drones can help crew members detect installation vulnerabilities and complications involved with building layout and roofing. As with mining and other dangerous works, the construction industry can improve with drones facilitating group coordination. Drones can also help locate and limit construction waste. - Emergency and Delivery Services - The odds of saving lives in emergency situations increase with drones. They can help find a lost or missing child and victims in remote locations. In order for drones to perform this function, they require high-quality communication capabilities and enhanced sky connectivity. Drones can transport disaster relief in the form of food and medicine to victims. It's actually possible for drones to help rescue victims underwater. They certainly have many more public safety uses following an earthquake, hurricane or tornado. - Films and TV - Drones are taking cinematography to new heights. They make shooting films and videos from the sky much easier and safer than using a helicopter or a crane. For film producers trying to cut labor costs, smart drones provide precision positioning and moving the camera to get desired elements within the picture frame. 
Drones have already been used for TV news programs, especially in live pursuits of police chasing suspects. Ultimately, drones allow for automated filmmaking integrated with machine learning software to get the best camera angles and trajectories. Mechanical Design Requirements - Drone parts - The drone's body connects with one, two, three or four propellers for lifting and vertical motion. Drones are typically powered by a motor that draws energy from an intelligent lithium-polymer (LiPo) battery. Drones can also be powered by solar cells, hydro fuel cells and laser beams. The legs of a drone are typically used for antennas; the compass, GPS, and other sensors are embedded in the body, while the camera is typically mounted on a camera platform attached to the body. - Remote controller - Users control the drone with a remote controller that includes joysticks for direction, similar to a video game. The remote controller may operate from a centralized base station. - Flight controller - This tiny onboard computer serves as an automated pilot that helps the drone achieve stability in difficult situations. The flight controller gets various signals from sensors. - MEMS sensors - Micro-electro-mechanical (MEMS) sensors improve flight performance and allow the drone to be controlled accurately by users. The Inertial Measurement Unit (IMU) is the main sensor, as it measures acceleration and rotation of the drone. A drone sensor is typically the size of an ant and can be used to measure barometric pressure and other metrics. - Accessories - Drones can be equipped with cameras for aerial views, integrated with AI and automation technology. A GPS module allows for satellite communication and determining geolocations. Watch the recording of our webinar “Connected Skies” to hear about drone laws, connected drones, drone delivery, and how drones are leveraging the power of mobile and satellite connectivity, and more. IoD Infrastructure Requirements In order for IoD to be effective on a mass scale for businesses and personal use, its infrastructure must be omnipresent, secure and flexible. At the core of the system should be smart technology, in which real-time data can be made available on demand. Ideally, the infrastructure allows for easy integration with new technology. Cybersecurity should be a top priority for drone owners, as special authentication and key exchange protocols must generate a symmetric security key. Yes, drones can be hacked, much like any form of electronic communication. Remote hijacking with malware is even worse, so the importance of developing strong cybersecurity layers cannot be overstated or overlooked. Another IoD requirement is seamless coverage across suburban, urban and rural areas. At the moment, a good percentage of the earth is still not connected to the internet. But any area with at least 4G+ connectivity is sufficient for drone-to-base data sharing. As far as vertical coverage goes, drones get as high as 30,000 feet, but typically fly between 200 and 400 feet above the ground. At some point, the drone industry can expect to face complaints about drones equipped with cameras and recorders that create privacy concerns. The smallest drones can fit in a person's hand and yet can house smart technology and cameras that take hours of video footage. Others may fear some drones are bound to drop from the sky and injure people, despite the advanced technology. Lost drones, though, can be found by the user through GPS. 
If skies become saturated with drones, there may be complaints about endangering birds and interfering with natural scenery. Law enforcement agencies and firefighters are exploring the possibilities of drones and have applied for drone permits with the FAA. If they start deploying hundreds of surveillance drones over a neighborhood to take photos, it could create heated local controversies and the rise of anti-drone laws. It may be true that criminals will have nowhere to hide in a drone-intensive environment, but it raises questions about the privacy of law-abiding citizens. The FAA controls which drones are allowed to fly and where. But it doesn't control cybercriminals who are always experimenting with new ways to abuse the internet. Perhaps the worst things that can happen with drones are being hacked and used to terrorize innocent people, or being misdirected and then stolen by cybercriminals. Many of these drone challenges are already being addressed by telecom innovators such as Ericsson, which is in the process of conducting advanced experimental drone research. The future of sky-based internet will be the advent of 6G wireless networks, but for now, internet of drones pioneers are working on 5G enhancements. Drones are set to play an important role in society as the new "carrier pigeons" of the digital age. In some ways, they can help take the strain off supply chain issues with "alternative transportation."
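The operating limits cited above (the FAA's 100 mph speed cap and typical flight altitudes of 200 to 400 feet) lend themselves to a simple telemetry check of the kind an IoD base station might run. The sketch below is purely illustrative; the field names and the use of 400 feet as a hard ceiling are assumptions made for the example, not regulatory guidance.

```python
MAX_SPEED_MPH = 100     # FAA speed limit cited in the article
MAX_ALTITUDE_FT = 400   # upper end of the typical flight range cited above

def check_telemetry(speed_mph, altitude_ft):
    """Return a list of any limit violations found in one telemetry sample."""
    violations = []
    if speed_mph > MAX_SPEED_MPH:
        violations.append(f"speed {speed_mph} mph exceeds {MAX_SPEED_MPH} mph")
    if altitude_ft > MAX_ALTITUDE_FT:
        violations.append(f"altitude {altitude_ft} ft exceeds {MAX_ALTITUDE_FT} ft")
    return violations

# Example telemetry sample from a drone (assumed values).
sample = {"speed_mph": 45, "altitude_ft": 380}
problems = check_telemetry(sample["speed_mph"], sample["altitude_ft"])
print(problems or "telemetry within limits")
```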
<urn:uuid:c2b294ec-5611-4683-92e9-fe6602fae5f1>
CC-MAIN-2022-40
https://iotmktg.com/what-is-internet-of-drones-iod/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00498.warc.gz
en
0.927768
1,641
3.484375
3
Art Villanueva – G2 Ops System Engineering Solution Director Keywords – architecture, technology, software, hardware, systems, enterprise, engineer, solution, systems architecture Estimated Reading Time: 4 minutes What does System Architecture mean in the world of technology and non-shelter engineering? The word architecture invokes reflections of structure. And often, it elicits feelings of awe and grandeur. After all, when we talk about architecture, we often refer to buildings – the beauty of St. Peter’s Basilica, the massiveness and simplicity of the pyramids of Giza, and the alluring gaudiness of the Sagrada Familia. But when we talk about systems architecture, we’re most likely not referring to brick and mortar. Systems architecture is the embodiment of a system solution that takes into consideration functional needs (what does it do?), non-functional needs (how would you describe it?), and most specifically, its required quality attributes. What is Architecture? Architecture is design (though not necessarily the other way around), whether it be of software, hardware, systems, or enterprises. Architecture defines the highest-level solution to a problem. System architecture is the collective structure and behavior of a system and its relationship with its environment. What an architecture is not, is a model. A model is simply a representation of an architecture akin to what a blueprint is to the architecture of an office building. But if systems architecture embodies the systems solution, it must also focus on such things as usability, reliability, security, affordability, and so on (sometimes called “ilities”). Because of this, when systems engineers refer to architecturally-significant requirements, it is these quality attribute requirements that matter the most. Because while problems often have many solutions, an architect realizes that there are only a handful of ways to satisfy the most important quality attribute requirements. So, what does this mean in the practical sense? Architecture for the Real World Take for example a need in which the goal is to get from point A to point B multiple times a week. Let’s assume the distance between the two points is anywhere between one and 300 miles of flat land. Functionally, many solutions exist, from a bicycle, a helicopter, a hot air balloon, a bus, and even a pair of shoes – you name it! A good architect, however, considers the non-functional requirements of this transportation device as the driving force in the design of a solution. Does the customer require efficiency as the primary driving requirement? Is it affordability? Reliability? Coolness factor? If money is no object (wouldn’t that be nice) – i.e. affordability is absolutely not an issue – and other circumstances (such as laws) allow it, perhaps a quadcopter-like machine will suffice. If the customer has $100 to spend and getting to the destination quickly is not an issue, perhaps a bicycle-like solution for this transportation device will do. Though this example is certainly contrived, it illustrates obvious architectural selections. In reality, these architectural decisions are not so clear-cut, and the architect is forced to weigh multiple quality attribute requirements against each other. Some or all may be equally important, and often diametrically opposed (e.g., how much more important is usability over security, or affordability over reliability?). 
Compromises are often necessary, and drive architectural solutions to be chosen among, say, an n-tier, a SOA, or a peer-to-peer solution. System Architect/Engineer collaboration Ultimately, because of many factors, the architecture of a system is defined not just by the architect, but equally as much by the implementing engineer. The relationship between an architect and an engineer is a push-and-pull, where the architect describes the vision of the solution, and the engineer provides its implementation. If the engineer cannot implement a particular architecture, the architect may be forced to reconsider other solutions or determine that, in consultation with the client, certain tradeoffs are now acceptable. It’s important to pay attention to non-functional requirements over functional ones. Architects, designers, engineers, and anyone who designs systems needs to know that functional requirements are not sufficient in defining a system, much less a complex one. As for the transportation example, if I were the customer and needed availability of fuel as well as reliability and sustainability of the device itself, I think a solution that’s an electric car would be just fine. Learn more about MBSE, Cybersecurity, and Cloud Engineering at www.G2-ops.com
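The trade-off reasoning described above, weighing quality attributes such as affordability, reliability, and speed against one another, is sometimes made explicit with a simple weighted scoring sheet. The sketch below illustrates that idea only; the weights, scores, and candidate solutions are invented for the transportation example and do not come from the article.

```python
# Relative importance of each quality attribute (assumed weights summing to 1.0).
weights = {"affordability": 0.4, "reliability": 0.3, "speed": 0.2, "coolness": 0.1}

# How well each candidate satisfies each attribute, scored 1-5 (assumed values).
candidates = {
    "bicycle":      {"affordability": 5, "reliability": 4, "speed": 1, "coolness": 2},
    "electric car": {"affordability": 2, "reliability": 4, "speed": 4, "coolness": 3},
    "quadcopter":   {"affordability": 1, "reliability": 2, "speed": 5, "coolness": 5},
}

def weighted_score(scores):
    """Sum each attribute score multiplied by its weight."""
    return sum(weights[attr] * value for attr, value in scores.items())

# Rank the candidate architectures from best to worst fit.
for name, scores in sorted(candidates.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```

A sheet like this does not replace the architect's judgment, but it forces the driving quality attributes and their relative priorities out into the open.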
<urn:uuid:7f87635a-6b7f-4c0a-8ed6-06ae5bb0728e>
CC-MAIN-2022-40
https://g2-ops.com/blog/what-you-need-to-know-about-systems-architecture/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00498.warc.gz
en
0.953373
958
2.875
3
Imbalanced data distribution is an important part of the machine learning workflow. An imbalanced dataset means the number of instances of one class is higher than that of the other; in other words, the number of observations is not the same for all the classes in a classification dataset. This problem is faced not only in binary-class data but also in multi-class data. In this article, we list some important techniques that will help you to deal with your imbalanced data. 1| Oversampling Technique This technique is used to modify the unequal data classes to create balanced datasets. When the quantity of data is insufficient, the oversampling method tries to balance by incrementing the size of rare samples. A primary technique used in oversampling is SMOTE (Synthetic Minority Over-sampling TEchnique). In this technique, the minority class is over-sampled by producing synthetic examples rather than by over-sampling with replacement, and for each minority class observation, it calculates the k nearest neighbours (k-NN). But this technique is limited by the assumption that the local space between any two positive instances belongs to the minority class, which may not always be true when the training data is not linearly separable. Depending upon the amount of oversampling required, neighbours from the k-NN are randomly chosen. Advantage: no loss of information. 2| Undersampling Technique Unlike oversampling, this technique balances the imbalanced dataset by reducing the size of the class which is in abundance. There are various methods for classification problems such as cluster centroids and Tomek links. The cluster centroid method replaces the cluster of samples by the cluster centroid of a K-means algorithm, and the Tomek link method removes unwanted overlap between classes until all minimally distanced nearest neighbours are of the same class. Advantages: run-time can be improved by decreasing the amount of training data, and it helps in solving memory problems. 3| Cost-Sensitive Learning Technique Cost-Sensitive Learning (CSL) takes the misclassification costs into consideration by minimising the total cost. The goal of this technique is mainly to pursue a high accuracy of classifying examples into a set of known classes. It plays an important role in machine learning algorithms, including real-world data mining applications. In this technique, the costs of false positive (FP), false negative (FN), true positive (TP), and true negative (TN) can be represented in a cost matrix as shown below, where C(i,j) represents the misclassification cost of classifying an instance, “i” is the predicted class, and “j” is the actual class. Here is an example of a cost matrix for binary classification.[…]
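For readers who want to try these techniques, the snippet below shows one common way they appear in Python: SMOTE from the imbalanced-learn package for oversampling, and class weights in scikit-learn as a simple form of cost-sensitive learning. It is a minimal sketch that assumes both libraries are installed; the dataset is synthetic and the weights are illustrative rather than tuned.

```python
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE

# Synthetic imbalanced dataset: roughly a 9:1 majority-to-minority ratio.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("Original class counts:", Counter(y))

# Oversampling with SMOTE: synthetic minority examples built from k nearest neighbours.
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print("After SMOTE:", Counter(y_res))

# Cost-sensitive learning: penalize minority-class mistakes more heavily
# instead of resampling (the 1:9 weighting here is illustrative, not tuned).
clf = LogisticRegression(class_weight={0: 1, 1: 9}, max_iter=1000).fit(X, y)
print("Cost-weighted model accuracy on training data:", clf.score(X, y))
```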
<urn:uuid:cc0d45e1-0095-4178-9c89-d2d3d434f5b7>
CC-MAIN-2022-40
https://swisscognitive.ch/2019/02/17/5-important-techniques-to-process-imbalanced-data-in-machine-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00498.warc.gz
en
0.91597
581
3.09375
3
Artificial intelligence is machine learning is deep learning, right? Wrong. Instead, think of them as layers of an onion: artificial intelligence is the large, shifting outer peel that can encompass a variety of technologies, applications, and philosophies. Of AI technologies, machine learning is one tool – a statistical tool – that can speed up analysis of large datasets. Deep learning, then, is a smaller, more intense part of ML that is defined by that statistical tool’s setup, functionality, and output. It is incorrect to use the terms ‘deep learning’ and ‘machine learning’ interchangeably. Both models do use statistics to explore data, extract useful meaning or patterns, and make predictions accordingly. Both models are a newer type of AI modeling that contrasts with classic rule-based algorithmic systems. But, these data modeling paradigms aren’t the same – deep learning can output information that is quicker to use and can seem closer to the AI we imagine. Let’s take a look. An overview of machine learning Machine learning is any approach that employs algorithms to sift through data and find patterns. Though a statistical process, it resembles a machine performing a specific mechanical function. The algorithm performs a function, set by the engineer or programmer, and then parses through the data to provide your answer. As the algorithmic model works its way through a given dataset, the model tends to get better at that function. Perhaps the algorithm has to sift through thousands of pictures of cars zooming through traffic lights to determine which were red lights, warranting a ticket, and which were not. For the first few tries, the algorithm won’t get everything right, but over time, it will increase in accuracy, improving well beyond human error. Some examples of machine learning: - Spotify or Apple Music serving up new musicians you may like, based on your listening history and what an aggregate of users with similar interests also like - A program seeking out malware - An app that can find favorable financial trades or opportunities Machine learning is automated, but only to a point. In machine learning, the programmer must still provide guidance, so that if the algorithm spits out a bad or wrong prediction, the programmer can step in and adjust. Going further with deep learning Any mention of deep learning will soon be followed by the term “neural networks”, the concept that deep learning is modeled on the human brain’s processing capabilities. This isn’t wholly incorrect, but this explanation tends to overstate the capabilities of deep learning. Here are the facts: deep learning is a subset of machine learning. Deep learning functions similarly to ML, using algorithms and vast amounts of data, but its capabilities go far beyond ML, so its results seem more “intelligent” or sophisticated. Instead of one or two algorithms working at once, as in ML, deep learning relies on a more sophisticated model that layers algorithms. This is known as an artificial neural network, or ANN. It is this artificial neural network that is inspired, theoretically, by our own brains. Neural networks continually analyze data and update predictions, just as our brains are constantly taking in information and drawing conclusions. Deep learning examples include identifying faces from pictures or videos and recognizing spoken words. One major difference is that deep learning, unlike ML, will correct itself in the case of a bad prediction, rendering the engineer less necessary. 
For example, if a lightbulb had deep learning capabilities, it could respond not just to “it’s dark” but to similar phrases like “I can’t see” or “Where’s the light switch?” Choosing between the two A quick way to separate machine and deep learning? Machine learning uses algorithms to make decisions based on what it has learned from data. But deep learning uses algorithms – in layers – to create an artificial neural network that makes intelligent decisions on its own. (This doesn’t mean it’s sentient!) Both will parse through your data and improve over time, and both can utilize supervised and unsupervised algorithm models, of which there are many. While machine learning might feel less sophisticated than deep learning, it shouldn’t immediately get passed over in favor of the mightier deep learning. In fact, machine learning makes sense for smaller data sets and less complicated tasks or automation. Weaknesses of machine learning and deep learning Despite the tech world’s seeming obsession with machine learning and deep learning, experts are wondering whether these AI tools are truly as deep and intelligent as we first anticipated. Even in cutting-edge deep learning environments, successes thus far have been limited to fields that have two vital components: massive amounts of available data and clear, well-defined tasks. Fields with both, like finance and parts of healthcare, benefit from ML and deep learning. But industries where tasks or data are fuzzy are not reaping these benefits. When it comes to decision making, like predicting an election or writing a persuasive essay, deep learning may be running directly into a technical wall. That’s because teaching common sense is a lot harder than teaching tasks. Common sense – perhaps a shorthand for thinking – is a broader, less tactile process that may produce vague outcomes. How do you teach an algorithm to understand concepts like reasoning, freedom, and wellness? Programmers are working on AI tools that don’t rely solely on machine or deep learning, rethinking our approach to and definition of “intelligence”. They’re seeking answers to questions that these models can’t comprehend, because they aren’t tasks. For instance, can an application go beyond recognizing words to understanding concepts? Can data be used efficiently for different types of analyses? The takeaway is this: deep learning and machine learning are different. They are both tools within the wide world of artificial intelligence. But neither is the silver bullet to achieving AI; investing in other AI approaches is the key to its potential.
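One way to make the contrast concrete is that, in scikit-learn terms, a classic machine-learning model and a small layered (neural-network) model can be trained on the same data with only the model class changing. The sketch below assumes scikit-learn is installed and uses a toy dataset and a tiny network purely for illustration; real deep learning involves far larger networks, datasets, and specialized frameworks.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# A small built-in image dataset of handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Classic" machine learning: a single statistical model.
ml_model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# A small layered (neural-network) model: algorithms stacked in layers.
dl_model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                         random_state=0).fit(X_train, y_train)

print("Logistic regression accuracy:", ml_model.score(X_test, y_test))
print("Small neural network accuracy:", dl_model.score(X_test, y_test))
```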
<urn:uuid:e0e07b84-755e-408c-89ac-9ac90fe5db0e>
CC-MAIN-2022-40
https://www.bmc.com/blogs/deep-learning-vs-machine-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00498.warc.gz
en
0.937231
1,240
3.578125
4
Biotechnology is yet another area of industry and technology that relies heavily on stable, secure, and reliable Internet connectivity. In fact, biotechnology Internet, especially the Internet of Things (IoT), is experiencing a revolution in usage and data sharing. More than ever, companies and researchers are depending on the Internet for communication, operations, data-sharing, and information storage. There is no doubt that the future of healthcare is tied securely to the Internet of Things (IoT). While healthcare technology has always been on the rise, the global pandemic has accelerated digital medical device usage into the mainstream with patients and providers alike. The Medical IoT now influences all aspects of medical care – from diagnosis, measurement, and monitoring to surgery and patient care. The Internet of Things (IoT) is a host of physical devices around the world that collect and share data across the internet. The rise of wireless networks and cheap data processors makes it possible to turn anything into part of the IoT. This adds a certain level of digital intelligence to devices that enable them to communicate without human involvement, effectively merging the physical and digital worlds. In doing so, however, we have created a new concern for IoT internet requirements The proliferation of new technologies has led to a surge in the demand for bandwidth. For businesses to stay competitive, it's necessary to keep up with this demand. There are a few ways for companies to go about doing this. One of the biggest new developments in the business landscape of late is real-time communications. This is a technology that's offering up some opportunities for businesses to connect -- both within and without -- and carry on in ways that weren't possible just a few years ago. Even as these new opportunities have emerged, though, so too have new challenges cropped up. Is it time to get rid of the wide-area network (WAN)? Some would say it is, but almost as many would say that it's just time for a better WAN. Making a WAN better calls for some fairly serious changes, but those changes can make the system more ready to take on the ever-increasing demands of modern business. The Very Fabric of WANs Has Altered More and more networks are getting away from the old standard of multiprotocol label switching (MPLS), a big part of the WAN since the 1990s, and are moving instead to software-defined wide area networking (SD-WAN) technology. Many enterprise users are discovering that business-grade or even some consumer-facing internet connections are offering more bandwidth than standard WAN services. To pass up such options, therefore, would leave businesses at a marked disadvantage. Though not every business is moving in this direction, many are developing a hybrid WAN environment that calls on both the standard WAN and the overall internet to deliver the best in value. Changes in the Cloud The cloud in general is bringing some of the biggest changes to WAN. Greater scalability than ever is now readily accessible, and new options in apps are coming into play. - Increased scalability. With WAN and cloud systems working together, there's a better ability to take advantage of collocations and remote operations. Small and medium-sized business (SMB) users are particularly interested in this phenomenon, and more and more, the larger data center is pretty much a province of large-scale operations. - New app options. 
With increasing cloud-based options coming available, that means new apps available on every front. Using WAN to tap into those new apps opens up options ranging from big data analysis to customer relationship management (CRM) tools and more. Changes From the IoT The IoT is also representing a game-changing experience for WAN operations. Thanks to the IoT's nature as what amounts to an internet of interconnected systems, the end result is both opportunity and hazard. - Greater demand for security. With all this data flowing through a system, and more points than ever requiring access, the data prizes are richer and the means of access that much easier. A greater demand for security therefore, naturally follows. - Capacity management is almost as vital. New data is likely to stretch demand for bandwidth to a fever pitch. Trying to keep capacity straight will be vital to ensure the network can handle all the demands placed on it, making capacity management crucial. How Can I Manage All These Changes? This is just the tip of the iceberg when it comes to changes. While IoT and cloud systems will affect the WAN, MHO Networks has expertise in the connectivity that next-generation networks will require. Our experience in offering superior internet will help you take the greatest advantage of these new and powerful systems.
<urn:uuid:e786d51f-6228-44e1-83fd-059022adaa51>
CC-MAIN-2022-40
https://blog.mho.com/topic/iot-internet-of-things
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00498.warc.gz
en
0.94353
958
2.703125
3
Reddit user vizionx1208 posted about their Samsung Galaxy S III exploding and catching fire while being charged recently. The pictures show the battery afterwards and the case being charred. You can see more pictures of the case and aftermath or more pictures of the battery. As a result of the topic that was popular a few years ago coming up again, the topic of Lithium-Ion batteries catching on fire came up again. Therefore, it is wise to inform yourself of the right information on how you should react if your laptop, tablet, smartphone, or other electronic device catches fire suddenly. The airline industry, for all of its “Please turn off all electronics“, has some excellent resources on Lithium battery fires and how to respond. Aviatas, an aviation training company, has two slideshows with sound for different audiences that cover the topic full-circle. It explains lithium batteries, why they catch fire, how to prevent it, and how to fight a lithium battery fire. This could be single cells, battery packs, and multicell batteries. - For Crew Airline & Airport Staff (those that might encounter Lithium batteries) - For Non Crew Airline & Airport Staff Thermal runaway is the cause for Lithium batteries catching fire and damaged batteries are more likely to experience thermal runaway. Other reasons for a fire starting could be manufacturer fault, over charging, or over discharging. While a lithium battery fire might extinguish itself fairly quickly, it will spew molten lithium which may spread the fire. The preferred way of extinguishing a lithium battery is to: - Unplug the item if it is on a charger. - Use water to extinguish the fire and cool the cells at the same time. Use a halon extinguisher if a water extinguisher is not available. - Ensure the fire does not start up again by keeping the cells cool with water (not ice). The FAA has additional information about extinguishing handheld devices. The FAA also reports that as of October 9th, 2012 there have been 132 air incidents involving batteries since March 20th, 1991. The chart (.pdf) lists cargo and baggage incidents with batteries that they are aware of. Lithium batteries are very common but fires involving lithium batteries are rare. If you experience one, you are very unlucky but you should feel lucky since you have read this information and are now prepared to respond to a lithium battery fire.
<urn:uuid:e562f52e-4c90-402a-ba86-9cedbad48835>
CC-MAIN-2022-40
https://www.404techsupport.com/2013/05/29/how-to-fight-a-lithium-ion-battery-fire/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00498.warc.gz
en
0.928964
498
2.578125
3
The arrival of October on our calendars is the perfect reminder to check how well your digital assets are protected. Initiated 18 years ago, Cybersecurity Awareness or National Cybersecurity Month was created to encourage businesses to review their firewalls, anti-virus protection, and even remote access policies. One area often overlooked in cybersecurity awareness is the focus on user authentication policies—especially Multi-factor Authentication (MFA). This is becoming even more relevant due to a growing number of employees in a hybrid work environment. Too many businesses remain comfortable with using just the traditional username and password approach to authentication. This kind of complacency can be compared to leaving your door key under the welcome mat. Someone, in this case, the intruder, knows the fastest and easiest way to ‘get in’ is often through the access points we seem to ignore or are most comfortable. Assuming the intruder won’t find your ‘hidden’ door key under the mat equates to the same thinking that an email hacker won’t be able to crack the code on a password that is actually the word ‘password’ written on a sticky note left on the computer keyboard. A 2020 study on password and authentication discovered that 67% of organizations require a periodic password change, and 65% prohibited password reuse. Over 60% had a minimum password length, but only 36% required a password manager. According to the study, employees continue to write passwords on sticky notes and use the same password on up to 10 accounts. In some cases, individuals even shared their credentials with others. Traditional credential processes put your organization at risk. That’s why government and industry agencies recommend using multi-factor authentication (MFA) technology to provide a stronger security posture. What is Multi-Factor Authentication? Multi-factor authentication (MFA) is a security system that requires user credentials to come from at least two of the following categories: - A user-generated code such as a password or PIN - A program-generated code that appears on user property such as a smart card or phone - A biometric recognition process such as voice recognition or fingerprints MFA implementations may use two (2FA) or three (3FA) categories for authentication. Whether 2FA or 3FA, the added credentials make for a more secure environment. How Does Multi-Factor Authentication Work? Most implementations use 2FA authentication. In a two-category deployment, a secure password is entered with a username. Once the user passes the initial log-in, the second form of identification is requested. The second method may be a security code sent to a smartphone or email address associated with the account. Sending the code to user-owned devices satisfies the second category requirement. The second authentication method assumes that a hacker cannot access the device and complete the authorization process. Trying to spoof a randomly generated code is more difficult than compromising a password. An alternative to the random code is a biometric requirement. For this category, fingerprints, retina scans, or voice recognition can serve as user-specific identification. However, no authentication process is 100% effective. Anything can be compromised, given enough time. For example, emails and SMS can be intercepted. Biometrics can be hacked; even the authenticating applications can be compromised. Organizations should not reduce their security programs when upgrading to MFA. 
The technology is a tool that works in conjunction with a well-designed cybersecurity plan to protect digital assets. Why is Multi-Factor Authentication Important? Microsoft reports that over 99.9% of account compromise attacks can be prevented with MFA. Given that 63% of data breaches can be traced to poor credential hygiene, adding a step to authentication only makes business sense. After all, the cost of a breach can cripple an organization. According to IBM’s latest data breach report, businesses continue to feel the effects of a cyberattack two years after the compromise. Immediate expenditures cover detection, escalation, and notification, but longer-term costs include response and remediation. Response and Remediation The financial impact of the response and remediation phase is the most significant, primarily because it results in a loss of business. As IBM reports, financial losses come from the following: - Downtime – Business disruption can cost millions. A recent study found that downtime can cost a small business between $80,000 and $260,000 per hour. - Lost Business – About 56% of consumers will stop doing business with a company after a cyberattack. - New Customers – Factoring in the cost of acquiring new customers only adds to the financial burden of a data breach. Depending on the industry, post-response penalties can be severe. Businesses that accept online payments may incur per-transaction penalties if they are found to be out of compliance. Then, there are fines for violating consumer privacy laws if personal data is taken. As the report highlights, the public relations costs can drag on for years as companies try to assure consumers and partners that their data is safe. Remote workers pose a serious threat to cybersecurity unless employers have a strong authentication process. Companies have little control over the security of at-home workers. How many devices use the same router? Has the password been changed from the default? For the most part, employers must depend on the security awareness of employees. Hackers are trolling the internet looking for weaknesses that will allow them to compromise a corporate system. With employees working from home or the local coffee shop, data is passing over an unsecured home or public network. Cybercriminals can easily extract credentials without anyone knowing they even tried. Organizations should be asking themselves how they determine employees are actually who they say they are. Anyone can use someone’s username and password to gain access, but it’s more difficult to enter a passcode or fingerprint without the individual being present. MFA is one method for knowing that employees are who they say they are. How Does MFA Strengthen Cybersecurity? MFA methodologies check user identities every time they log in from a different device. The process reduces the risk of compromised user credentials being used. Access is not granted unless users supply a passcode or use a fingerprint. Reducing the risk of compromised user credentials means minimizing the odds of a data breach. According to Google, adding MFA can prevent over 95% of bulk phishing attempts and over 75% of targeted attacks. Multi-factor authentication can help address risks like these: - 81% of breaches are the result of credential theft - 73% of passwords are used for more than one account - 50% of employees use shadow apps So, what cyber tricks do hackers use to gain access to user credentials? 
Hackers send messages to a list of email addresses or phone numbers with a call to action (CTA) at the end. The CTA requires users to go to a fake website to enter their usernames and passwords. Cybercriminals now have access to credentials that can log them into one or more systems. Similar to phishing, spear phishing targets a specific group using personalized messages. Hackers comb social media accounts and websites to learn more about the targeted individuals. When ready, they begin a campaign that leads to credential theft. The more data they have, the more trustworthy the communication appears. Cybercriminals install programs to capture keystrokes from the user’s computer. These programs capture everything from usernames and passwords to domain names and IP addresses. If the target is a privileged user, these bad actors can access an entire infrastructure. Hackers know that people reuse their usernames and passwords. As a matter of course, they use the stolen credentials to access other programs and sites. Merely watching social media accounts enables bad actors to guess where stolen credentials may be valid. Brute Force and Counter Brute Force Attacks Brute force attacks may lack finesse, but they can be effective. Cybercriminals deploy password-generating software that tries thousands of possible passwords in seconds. The program is looking for common credentials such as password123 to gain access to a system. These attacks use a third-party connection to get to a specific user. Whether they observe interactions or redirect connections, the bad actors are waiting for people to enter their login information. Cybercriminals are not lacking tools that compromise user credentials. If sophisticated enough, they might try to access a phone’s SIM card or decrypt a private connection. MFA technologies can eliminate most attempts at credential compromise, reducing the chance of a data breach or ransomware attack. Available Multi-Factor Authentication Tools There’s no shortage of MFA tools to help deploy the added credential security. Although most solutions target businesses, individuals can improve their security through MFA tools. MFA tools prevent internal theft, external threats, and data loss through various methodologies. For example: - Risk-based software uses factors such as IP address, domain reputation, device posture, and geolocation to assess user authentication risks. - Passwordless software is a form of MFA that can be used for authentication, using alternative factors for a user-generated component. MFA can be sold as an endpoint solution or as a cloud-based service. Businesses may consider identity and access management (IAM) software or customer-based identity and access management (CIAM) solutions. Finding the right tools for your business depends on many factors: - Security levels – Some solutions offer different authentication requirements based on user privilege, for example. Remote workers may have added security requirements to protect against identity theft. The right solution depends on what an organization requires. However, the practice of least privilege should be followed. - Ease of use – No matter how good a solution is, it won’t be if it is difficult to use. Some software may have such complex security level settings that administrators end up giving everyone the same access, violating the best practice of least privilege. - Existing workflows – Most organizations have some form of credential security. 
If the MFA solution disrupts workflows, employees become frustrated, and productivity declines. Make sure that the tool offers a method that minimizes disruption, so employees are more willing to adapt to the change easily. - Cost – MFA solutions, whether enterprise-wide endpoint implementations or cloud-based deployments, can vary in cost based on features and the number of users. There’s no single right MFA tool; however, all tools should minimize the risk of credential compromise. The challenge is knowing which tools would work best for your specific business needs when it comes to your cybersecurity protection. While the overall threat of hacks is always looming, there are fast and relatively easy steps you can take today to shore up your vulnerabilities. We’d love to help you determine how to best improve your overall cybersecurity preparedness. Let’s talk about your needs and how to make sure cybercriminals don’t find the proverbial key left under the mat to gain access to your business when they should permanently be locked out.
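To make the program-generated-code category concrete, the sketch below uses the pyotp library to generate and verify a time-based one-time password (TOTP), the mechanism behind most authenticator apps. It assumes pyotp is installed and is illustrative only; a real deployment would store the secret securely and handle enrollment, rate limiting, and backup codes.

```python
import pyotp

# Enrollment: generate a per-user secret and share it with the user's
# authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app computes the current 6-digit code from the shared secret.
code_from_user = totp.now()

# Login: after the password check passes, verify the submitted one-time code.
if totp.verify(code_from_user):
    print("Second factor accepted - grant access.")
else:
    print("Second factor rejected - deny access.")
```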
<urn:uuid:1d5be490-6872-43cb-beb0-06a20fbcb2a9>
CC-MAIN-2022-40
https://gomachado.com/why-your-business-needs-multi-factor-authentication-mfa-now/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00498.warc.gz
en
0.922859
2,326
2.765625
3
The key principles of Six Sigma: What are they and why are they important? Aim of Six Sigma Six Sigma is an approach to reduce defects in processes of any kind. The motive of every business is to satisfy the customers and make a profit. These motives can be addressed by reducing the defects in the process. By reducing the defects, the business can produce more products and increase profit. By reducing variation in the process, the business can provide whatever product or service the customer needs. Two Methodologies of Six Sigma Six Sigma uses two methodologies to approach a problem. They are: 1. DMAIC - The DMAIC methodology has 5 phases: Define, Measure, Analyse, Improve and Control. It is used to improve the existing process. In this methodology, the problem is identified, the impact of the problem is measured and the root cause of the problem is analyzed. Then the root cause is eliminated in the existing process and the process is controlled so that it does not go back to the previous state. This controls the variation in the process. 2. DMADV - This methodology has 5 phases: Define, Measure, Analyse, Design and Verify. It is used to design a new process in such a way that there is no variation. This is called design for six sigma (DFSS). In this methodology, the problem is identified, the impact of the problem is measured and the root cause of the problem is analyzed. Then the system is designed to eliminate the identified problem and verified. You may also like: Lean vs Six Sigma- What is the difference between them? DPMO - Target of every six sigma project The target of every six sigma project is to achieve a metric of 3.4 defects per million opportunities. The six sigma methodologies aim to make the system more sophisticated in such a way that the process will not produce more than 3.4 defects per million opportunities. The target may not be achieved overnight or in a single project. It takes a series of projects at different levels of the business, which involve all the stakeholders of the business. Principles of Six Sigma There are 5 key principles of Six Sigma: 1. Focus on Customer Requirements: The initial phase in the Six Sigma process is defining “quality” from the point of view of the customer. Every customer defines quality differently. A business needs to measure quality in a similar way its customers do. By addressing the needs of the customer, the business can define quality for the customer. 2. Use data to identify the variation in the process: Variations in the process are of two types: special cause variation and natural variation. Special cause variation is caused by external factors. Natural variation is random variation present in the process. Six Sigma aims to reduce the special cause variation. To identify the root cause of the variation, understanding the process is necessary. The understanding of the process should be deep and extensive. Knowledge about the process cannot be gained unless it is studied. To understand the process clearly, detailed data about the process is important. To collect detailed data: define the goals for data collection clearly; identify the data; define the reason for the data collection; define the expected insight; define the data collection method; eliminate error in the data collection by doing Measurement System Analysis; and define the data collection plan. Data collection in the Six Sigma process includes interviewing people, taking observations, and asking questions. 
Once the data collection is finished, check whether the collected data gives the required knowledge to meet the objectives which were set up. If not, repeat the data collection plan and get more information. The process is repeated to find answers. Identify the potential root causes for the variation in the process and try to eliminate them with the help of the collected data. After identifying the potential root causes, analyze them. To identify the significant root cause that creates the variation, use statistical analysis. You may also like: The 6 amazing benefits of employing Six Sigma in your organization 3. Continually improving the process to eliminate the variation: After identifying root causes, make changes to eliminate variation in the process. Thus the defects in the process are removed. Additionally, search for ways to remove steps that do not add value to the customer. This will eliminate waste in the process. Identify variation and eliminate it. Don't wait for the variation to show itself. Collect data, talk to people, and study the data to identify variations in the process. Variation may have become routine because “that's the way we've always done things.” 4. Involve people from different levels of management and process: Six Sigma is formed on the foundation of good teams. Good teams comprise people who take responsibility for the Six Sigma processes. The people on the team need training in Six Sigma's methods. Forming a cross-functional team with people from different backgrounds will help identify variation. For example, a Six Sigma team for a health care project might consist of top management people, doctors, nurses, managers, and people from operations and purchasing. 5. Be flexible and thorough: Six Sigma requires adaptability from various points of view. The business’s management system needs to acknowledge positive changes. People should be motivated to accept the changes in the system to eliminate the variation. To motivate employees, the benefits of the Six Sigma system should be made clear to all levels of employees. This will make the changes easily acceptable. Six Sigma also requires reducing variation to be thorough. So understand all the aspects of a process—the steps, stakeholders, and methods involved. This will help to ensure that any new or updated process works. You may also like: An Ultimate Guide to Control Charts in Six Sigma The importance of Six Sigma Principles: Six Sigma is such a popular approach for process management because it forms a way through which one can improve their processes. It is centrally a customer and product-driven approach. The main purpose of this approach is to avoid waste and deliver the product while meeting all the requirements of the customer, thereby increasing customer satisfaction. To accurately analyze the process, Six Sigma requires precise data. The below goals, also known as business success factors, can be achieved by implementing Six Sigma in a company: reduced operation cost, high customer satisfaction, improved employee morale, and an efficient work environment. By implementing the Six Sigma methodology, the business can reduce the variation in the process. With Six Sigma in the process, the business can focus clearly on customer requirements. It can adhere to customer requirements. By adhering to the customers’ requirements, the business can improve its customer satisfaction. Highly satisfied customers bring more profits to the business. With the help of Six Sigma, the operation cost is reduced and it is converted to profit. 
It also improves employee morale and the comfort of the workplace, which creates a happy and efficient work environment. This yields greater productivity and a higher-quality product.
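The 3.4 defects-per-million-opportunities target mentioned earlier comes from a simple calculation that teams often automate. The sketch below shows the standard DPMO formula in Python; the sample numbers are invented for illustration.

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def defect_rate(defects, units, opportunities_per_unit):
    """Fraction of opportunities that resulted in a defect."""
    return defects / (units * opportunities_per_unit)

# Example: 25 defects found while inspecting 1,000 units,
# each with 10 opportunities for a defect (assumed numbers).
print("DPMO:", dpmo(25, 1000, 10))              # 2500.0
print("Yield:", 1 - defect_rate(25, 1000, 10))  # 0.9975
```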
<urn:uuid:7261b9f7-7e46-4357-9984-fd155e5630fd>
CC-MAIN-2022-40
https://www.greycampus.com/blog/quality-management/principles-of-six-sigma
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00498.warc.gz
en
0.932693
1,458
2.921875
3
Microsoft Excel is a spreadsheet application that is capable of doing so many different types of things to values and data. Even if you use Excel frequently, there are still probably hundreds of things you have yet to learn how to do. This often happens with software as we learn what we need to do and move on. One thing that makes Excel extremely valuable is that even when you don't know half of what it can do, it can still be a super effective tool. Some software applications, such as Photoshop, can be overwhelming when you are unfamiliar with them even if you are only trying to do a simple thing. Excel is powerful like Photoshop, but it is more approachable for people learning to use it. This post discusses how to use nested IF statements to automate calculations for different types of data. How to Use Nested IF Statements in Excel to Automate Calculations IF statements operate much like functions, though they also have some major differences. While functions can be complex, with multiple steps processing data, IF statements go even further. An IF statement can take data, compare it to certain criteria, then funnel the data to the appropriate function before delivering output. In this post we will use the example of selling items online and charging certain shipping rates based on the total sale price. We can use an IF statement to automatically calculate a flat rate price for items under a certain price as well as a percentage for larger orders. Specifically, the steps below will demonstrate how to set an IF statement to assign a flat rate shipping cost of $7.99 for all orders under $100, a flat rate of $15.99 for orders totaling between $100 and $275, and 6% for all orders over $275. Keep in mind both the numbers representing the flat rate and percentage shipping costs, as well as the limits set for each, can be changed to anything that suits your particular needs. This is simply a general example to show you how an IF statement works and one of the many ways they can benefit you. Follow the steps below to do this: - Open Excel and create headings for as many columns as desired. In this example, to keep it simplified, I have created two columns - one for the cost of an order and one for the associated shipping fees our IF statement will calculate for us. - Click into the cell where you want to place the IF statement. Click the fx button above the spreadsheet. This will automatically enter an equals sign (=) into the cell and bring up the Insert Function box. In my example above, the IF statement is in my history, but your results will vary based on what you have used in the past. If the IF function is not listed in the box under "Select a function:", follow these steps to search for it: - Type "IF" in the box at the top under "Search for a function:" and click the "Go" button to search. - Click on "IF" in the box under "Select a function:" and click the "OK" button. Once the IF function has been selected, you will be prompted with the Function Arguments box. - From here, type the first parameter into the "Logical_test" box. In our example, this would be A3<100 and "$7.99" in the box next to "Value_if_true". Excel reads this as: if the order total placed in cell A3 is less than 100, the value returned in cell A4, where the formula is, should be $7.99. This is only one portion of the values we want to return, but if we type a number that falls within the criteria entered so far, we will get the correct result. 
Now we simply need to add additional IF statements, in a nested format, so that the order totals can be calculated and produce our desired output, regardless of the amount entered into the order total column. - In the Function Arguments box, click in the "Value_if_false" box. NOTE: These formulas can also be typed into the box above the cells next to the function button you clicked earlier. Just pay close attention to the syntax, which is shown in the image examples below. - Continuing with our example, we are going to type in the next IF statement. For our example, our next criterion is to charge $15.99 for shipping on orders that total between $100 and $275. To do this, we type IF(A3<276,"$15.99",. This is read by Excel as "If the value in cell A3 is less than 276, output $15.99 in the cell where the formula exists". If this were the end of our IF statements, we would close the parentheses. However, in our case we have one more criterion to consider, so we end with a comma instead. - To include our shipping rates for orders totaling over $275, we add the following IF statement: IF(A3>275,A3*.06)). This is read by Excel as "If the value in cell A3 is greater than $275, multiply the value by 6% and enter the result in the cell where the formula exists". There are two closing parentheses to match the two IF statements in that box. NOTE: In the image below you will see three closing parentheses ending the IF statement in the box next to where you started the function. This is because all the IF statements are listed in a row together. Now with all the criteria entered, when you enter a value in the order total box, the appropriate shipping value is calculated in the Shipping Fees cell next to it. Excel is a very powerful tool because it can save you lots of time. However, it takes an understanding of how the application works and some planning to properly set up a spreadsheet before this happens. The first step is figuring out what criteria you have. This will help you create the logical statements to process the values. Always check your break points in values to be sure they are working as expected. In our example above, you can see I checked the following values for this reason: 99, 100, 275, 276 in addition to various other numbers. As always, taking time to do the planning will save you a great deal of time later!
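For reference, assuming the order total sits in cell A3 as in the example above, the three IF statements assemble into a single nested formula like the one below. The cell reference, rates, and break points are just the example values and can be swapped for whatever suits your own sheet:
=IF(A3<100,"$7.99",IF(A3<276,"$15.99",IF(A3>275,A3*.06)))
Typing this directly into the Shipping Fees cell produces the same result as building it step by step through the Function Arguments box, and the three closing parentheses at the end match the three IF statements in the chain.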
<urn:uuid:4560d326-5efb-4098-b8e2-9490b4effbf7>
CC-MAIN-2022-40
https://blogs.eyonic.com/how-to-use-if-statements-in-excel-to-automate-calculations/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00498.warc.gz
en
0.930196
1,290
3
3
Blockchain technology might be best known for its use with cryptocurrencies such as Bitcoin and Dogecoin, but that's just one type of blockchain. There are other varieties that could prove useful in certain sectors. Let's take a look at what they are, how they might be used, and what some of their benefits and shortcomings are. The largest benefit of the blockchain, which is essentially a decentralized ledger of transactions, can be seen in Bitcoin, but the shortcomings are also notable. Blockchains consume a considerable amount of energy to operate, making them difficult at best for businesses to take advantage of. Bitcoin operates using what is called the public blockchain; as such, it cannot store sensitive information or proprietary data without putting it all at considerable risk. Here are the four varieties of blockchains that organizations can utilize. The public blockchain is the most open form of blockchain, and anyone can participate in transactions and maintain their own copy of the ledger. The only prerequisite is a connection to the Internet. The public blockchain was the first type created, and it is the most common one used by cryptocurrencies, but it has other applications that could be considered in the future, such as voting and fundraising. All of these uses are only possible due to the openness of the system. While the openness is a great benefit to the public blockchain, there are other challenges that can get in the way of its use, namely the fact that these transactions happen at a slow rate, which also limits the scope of the network in question. Rather than being accessible to all, a private blockchain is a closed network that is maintained by a single central entity. Unlike the public blockchain, the private blockchain offers greater security and trust within its own operations. Besides this difference in centralization, the private and public blockchains are similar in functionality. The efficiency of this centralized system makes the entire blockchain operate more smoothly, but at the same time, the security benefits that come from decentralization are reduced somewhat. Some of the key uses for a private blockchain include supply chain management, internal voting, and asset ownership, all uses that benefit from that added security. It is critical that any organization seeking to implement a private blockchain consider this weakness. When you combine the public and private blockchains, you get a solution that can leverage the advantages of both. A hybrid blockchain allows users to connect to the public network without sacrificing privacy. Organizations can use customizable rules to keep data secure. There are some downsides to this solution, though. A hybrid blockchain lacks the transparency of other blockchains, and as such, there is little incentive for organizations to go through the adoption process. Despite this, there are some notable uses for a hybrid blockchain. For example, industries like real estate and retail might find it palatable. Similar to the hybrid blockchain, a federated blockchain combines benefits offered by the public and private blockchains, keeping some records open while securing others. This is beneficial because multiple organizations might get value out of the network, and thus, keeping it decentralized works in their favor. The federated blockchain is both customizable and efficient, but even with the use of access controls, this blockchain is more vulnerable, less transparent, and less anonymous than the others.
Ideas for how to utilize the federated blockchain include banking, research, and food tracking. Have you considered the use of blockchain technology for your organization? The latest blockchain technology solutions can be a great boon for your business if implemented properly. Contact AE Technology Group for an IT consultation; let our technicians help you determine the best path forward. To learn more, reach out to us at (516) 536-5006.
<urn:uuid:33692028-459d-47f8-aeff-4d56cc3c1566>
CC-MAIN-2022-40
https://www.aetechgroup.com/blockchain-technology-solutions-for-your-business/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00498.warc.gz
en
0.945376
751
2.984375
3
WASHINGTON, July 2, 2019 — About 80 million Americans in unserved or underserved locations could receive gigabit broadband access through spectrum sharing without harming existing operations, according to research conducted by Jeffrey Reed, Professor of Electrical and Computer Engineering at Virginia Tech. Reed presented a summary of the study results at a press event on Tuesday sponsored by the Wireless Internet Service Providers Association, prior to filing them to the Federal Communications Commission C-band docket. The FCC is currently looking at changing or expanding the use of the 3.7 to 4.2 gigahertz (GHz) satellite downlink band. There has been significant debate over the portion of the band that would potentially be cleared and made available at auction. But less attention has been given to the remaining uncleared portion, which will house remaining satellite operations. Without harming these operations, technologists claim, this same portion of the band could be used to provide multi-hundred-megabit broadband service over fixed wireless access links. Spectrum sharing will bring more rural Americans into the digital economy, enabling more robust precision agriculture, distance learning, telemedicine, and more, said Claude Aiken, CEO of the Wireless Internet Service Providers Association. According to Reed's study, which was sponsored by WISPA, Microsoft and Google, exclusion zones of about 10 kilometers are sufficient to protect fixed-satellite service earth stations from harmful interference caused by co-channel point-to-multipoint broadband systems. These so-called "P2MP" systems operating outside of the exclusion zones could provide gigabit broadband access to 80 million Americans, particularly in rural areas, the advocates said. The study results demonstrate that P2MP systems can operate co-channel with existing earth stations, said Google's Spectrum Engineering Lead Andrew Clegg. Repacking the C-band will not affect these results because they assume co-channel sharing with all 18,000 registered earth stations. The only criterion that matters is the location of the earth stations, and the P2MP systems can be carefully and deliberately designed around them. Many other plans have been proposed for the use of this valuable C-band spectrum. On Tuesday, entities representing incumbent and prospective users of the C-band spectrum submitted a proposal to the FCC asking for increased spectrum to be reallocated for 5G services. That proposal will require physical re-farming of earth station facilities, taking a significant amount of time, said Clegg. By contrast, the spectrum sharing plan will provide benefits to as many people as possible, as quickly as possible. Aiken pointed out that these proposals are not necessarily mutually exclusive. A sharing component with sufficient spectrum to ensure robust rural broadband service can and should be a part of any final proposal from the FCC, he said. In closing, Clegg quoted FCC Commissioner Michael O'Rielly: "We no longer have the luxury of over-protecting incumbents via technical rules, enormous guard bands, or super-sized protection zones. Every megahertz must be used as efficiently as possible." (Photo of Dr. Andrew Clegg by Emily McPhie.) NTCA Smart Rural Communities, International Telecommunications Union Conference, Carr on TikTok September 26, 2022 – Rural Broadband Association CEO Shirley Bloomfield on Monday announced a partnership with the National Rural Education Association to promote educational opportunities for rural children.
Speaking at the launch of the NTCA trade show in San Francisco on Monday, Bloomfield said that the program will help educate kids about the value of rural broadband services. Bloomfield said it will help address a common lament in rural areas: "How do we make sure that you can keep that home grown talent?" The pilot program with the rural education group will help promote the importance of broadband jobs in rural areas. Telecom officials to be in Romania for ITU election Key telecom agency officials are expected this week to attend the International Telecommunications Union conference, where the new head of the United Nations' telecom regulator will be elected. Federal Communications Commission Chairwoman Jessica Rosenworcel, FCC Commissioner Geoffrey Starks, head of the National Telecommunications and Information Administration Alan Davidson, and deputy secretary of Commerce Don Graves are expected in Bucharest, Romania, where American Doreen Bogdan-Martin is in the running against Russian challenger Rashid Ismailov. Last week, President Joe Biden said he strongly supports the candidacy of Bogdan-Martin. The ITU develops international connectivity standards for communications networks and works to improve access to information and communication technologies for underserved communities worldwide. The conference is being held from September 25 – 29. The FCC expressed concerns over TikTok security and big tech contributions FCC Commissioner Brendan Carr said in a statement Monday that he spoke with European Union officials in Brussels about the need for Big Tech to contribute to the development of broadband networks and about the alleged security risks of the Chinese video-sharing app TikTok. Carr has previously said that big technology companies should contribute to the Universal Service Fund, a roughly $10-billion pot of money that goes to support basic telecommunications builds across the nation. Money for the fund comes from voice service providers, but critics have said that the fund's base of contributors needs to be broadened for its sustainability. Carr also reiterated his position that TikTok poses a security and privacy threat to Americans. "TikTok functions as a sophisticated surveillance tool that harvests extensive amounts of personal and sensitive data," he said in the statement. "And recent reporting indicates that there is no check on this sensitive data being accessed from inside China." The security of TikTok has been an ongoing issue, with American senators saying that TikTok may be collecting biometric data and storing it in an unknown database. Reason 4 to Attend Broadband Mapping Masterclass: Measuring Actual Speeds The 4th of 5 reasons to attend the Broadband Mapping Masterclass with Drew Clark on 9/27 at 12 Noon ET WASHINGTON, September 26, 2022 – The fourth reason to attend the Broadband Mapping Masterclass with Drew Clark on September 27, 2022, is to understand the role that speed tests are playing in the discussion about actual speeds versus available speeds – and its importance for federal and state efforts to distribute broadband infrastructure funds. Broadband Breakfast is hosting the 2-hour Broadband Mapping Masterclass to help Internet Service Providers, mapping and GIS consultants, and people in everyday communities concerned about broadband mapping. This 2-hour Masterclass, available for only $99, will help you navigate the treacherous waters around broadband mapping.
The live Broadband Mapping Masterclass is being recorded, and those who make a one-time $99 payment will obtain a guaranteed place during the live session. ENROLL TODAY for our Zoom Webinar through PayPal. Registrants will also receive unlimited on-demand access to the Masterclass recording. And they will receive Broadband Breakfast's premium research report on broadband mapping. We're presenting five additional reasons to attend the Broadband Mapping Masterclass. Additional reason number 4 to attend the Masterclass The last time that the federal government initiated a significant effort to fund broadband, in 2009, the United States lacked a basic map of what we at Broadband Breakfast have for years called the Broadband SPARC: Measuring Speeds, Prices, Availability, Reliability and Competition by high-speed internet access providers. The National Broadband Map was a first effort to measure availability and competition by displaying the individual providers that offered broadband on a Census block level. But it lacked any measure of broadband speeds, prices or the reliability of such information. Over the past 13 years, we now have a great variety of robust sources of speed test data – as well as significant datasets with information about pricing and reliability of broadband. The Broadband Mapping Masterclass will explore ways in which actual speed data has been and can be used to crosscheck the quality of broadband availability data released by the Federal Communications Commission. By attending the Broadband Mapping Masterclass, you'll learn what you need to know in order to assess the quality of broadband data as made available by federal and state agencies, and private companies and organizations. ENROLL TODAY to find out what happens next. Read more about the reasons to attend the Broadband Mapping Masterclass - Reason 1 to Attend Broadband Mapping Masterclass: Ripping the Fabric - Reason 2 to Attend Broadband Mapping Masterclass: Aren't There Other Databases? - Reason 3 to Attend Broadband Mapping Masterclass: State Maps vs. Federal Maps - Reason 4 to Attend Broadband Mapping Masterclass: Measuring Actual Speeds - Reason 5 to Attend Broadband Mapping Masterclass: Understanding Public Challenges Dianne Crocker: Recession Fears Have Real Estate Market Forecasters Hitting the Reset Button Growing fears of recession trigger pullback on previous rosy forecasts. The lyrics to "Same As It Ever Was" by the Talking Heads certainly don't apply to how 2022 is playing out in the commercial real estate market. Two quarters of negative economic growth have put a damper on market sentiment and triggered fears that the U.S. economy is heading for a recession. By midyear, market analysts were taking a good, hard look at their rosy forecasts from the start of the New Year and redrawing the lines. Once upon a time… At the start of 2022, forecasters were bullishly predicting that commercial real estate investment and lending levels would be nearly as good as 2021. This was significant, considering that 2021 set new records for deal-making and lending volume as the debt and equity capital amassed during the pandemic went looking for a home in U.S. commercial real estate. What a difference a few quarters have made. Virtually all the predictions that started the New Year were obsolete by mid-summer. The abrupt shift in market conditions is palpable and surprised just about everyone. Now, markets are reaching an inflection point that is in sharp contrast with the strong rebound of last year.
The two I’s: Inflation and interest rates At the core of the recent upset in market sentiment is the persistence of high inflation, which seems to be ignoring all attempts by the Federal Reserve to raise interest rates and bring prices down. Higher inflation is having a ripple effect throughout the economy, pushing up the costs of construction materials, energy, and consumer goods. Among the notable economic indicators showing stress at mid-year was the GDP, which fell for the second consecutive quarter, and the Consumer Price Index, which jumped 9.1% year-over-year in June – the highest increase in about four decades. In July, the CPI fell to 8.5%, an encouraging sign that inflation was beginning to stabilize. By the latest August report from LightBox, however, hopes were dashed when the CPI showed little improvement, holding firm at a still high of 8.3%. The market is responding to a higher cost of capital as lenders tap the brakes. As the cost of capital rises with each interest rate hike and concerns of a recession intensify, many large U.S. financial institutions are pulling back on their loan originations for the rest of 2022 and into 2023. This change in tenor is a significant shift, given that 2021 was a record-breaking year for commercial real estate lending. Many lenders have already shifted to a more defensive underwriting position as they look to mitigate risks. The Mortgage Bankers Association, which had previously predicted that lending levels in 2022 would break the $1 trillion mark for the first time revised their forecast downward in mid-July. By year-end, the MBA now expects volume to be a significant 18% below 2021 levels—and one-third lower than the bullish forecast made in February. Now, investment activity is cooling as higher borrowing costs drive some buyers from the market. In the investment world, transactions were down by 29% at midyear due to a thinning buyer pool as higher rates impact access to debt capital. Market volatility is causing investors, lenders, and owners to rethink strategies, reconsider assumptions, and prepare for possible disruption. Looking ahead to year-end and 2023 The rapid and diverse shifts in the market make for an uncertain forecast and certainly a more cautious investment environment. The battle between inflation and interest rates will continue over the near term. As LightBox’s investor, lender, valuation, and environmental due diligence clients move toward the 4th quarter—typically the busiest quarter of the year–unprecedented volatility is driving them to recalibrate and reforecast given recent market developments. Continued softness in transaction volume is likely to continue as rates and valuations establish a new equilibrium. If property prices begin to level out, there will be more pressure on buyers to consider how to improve a property to get their return on investment. The next chapter of the commercial real estate market will be defined by how long inflation sticks around, how high interest rates go, and whether the economy slips into a recession (and how deeply). The greatest areas of opportunity will be found in asset classes like office and retail that are evolving away from traditional uses and morphing to meet the needs of today’s market. Until barometers stabilize, it’s important to rethink assumptions, watch developments, and recalibrate as necessary. 
Dianne Crocker is the Principal Analyst for LightBox, delivering strategic analytics, best practices in risk management, market intelligence reports, educational seminars, and customized research for stakeholders in commercial real estate deals. She is a highly respected expert on commercial real estate market trends. This piece is exclusive to Broadband Breakfast. Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to firstname.lastname@example.org. The views reflected in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.
<urn:uuid:4b59ab17-7c90-4ce5-aecb-5fb25fb4941a>
CC-MAIN-2022-40
https://broadbandbreakfast.com/2019/07/techies-tout-plan-to-share-the-c-band-and-bring-high-speed-wireless-access-to-millions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00698.warc.gz
en
0.929237
3,242
2.71875
3
Learn how routers securely connect your small business to the rest of the world and connect your devices, including laptops and printers, to each other. Routers connect computers and other devices to the Internet. A router acts as a dispatcher, choosing the best route for your information to travel. It connects your business to the world, protects information from security threats, and can even decide which computers get priority over others. A router helps you connect multiple devices to the Internet, and connect the devices to each other. Also, you can use routers to create local networks of devices. These local networks are useful if you want to share files among devices or allow employees to share software tools. If you don’t have routers, your business's data won’t get directed to the right place. For example, if you'd like to print a document, you need a router to help get that document to a printer—not to another computer or a scanner. A modem connects your business to Internet access via your internet service provider (ISP). A router, on the other hand, connects many devices in a network—including modems. With a router in place, modems and other devices can transfer data from one location to another. Wired routers usually connect directly to modems or wide-area networks (WANs) via network cables. They typically come with a port that connects to modems to communicate with the Internet. Routers can also connect wirelessly to devices that support the same wireless standards. Wireless routers can receive information from and send information to the Internet. Routing is the ability to forward IP packets—a package of data with an Internet protocol (IP) address—from one network to another. The router's job is to connect the networks in your business and manage traffic within these networks. Routers typically have at least two network interface cards, or NICs, that allow the router to connect to other networks. Routers figure out the fastest data path between devices connected on a network, and then send data along these paths. To do this, routers use what's called a "metric value," or preference number. If a router has the choice of two routes to the same location, it will choose the path with the lowest metric. The metrics are stored in a routing table. A routing table, which is stored on your router, is a list of all possible paths in your network. When routers receive IP packets that need to be forwarded somewhere else in the network, the router looks at the packet's destination IP address and then searches for the routing information in the routing table. If you are managing a network, you need to become familiar with routing tables since they'll help you troubleshoot networking issues. For example, if you understand the structure and lookup process of routing tables, you should be able to diagnose any routing table issue, regardless of your level of familiarity with a particular routing protocol. As an example, you might notice that the routing table has all the routes you expect to see, yet packet forwarding is not working as well as expected. By knowing how to look up a packet's destination IP address, you can determine if the packet is being forwarded, why the packet is being sent elsewhere, or whether the packet has been discarded. When you need to make changes to your network's routing options, you log in to your router to access its software. 
For example, you can log in to the router to change login passwords, encrypt the network, create port forwarding rules, or update the router's firmware. Routers help give employees access to business applications and therefore improve productivity—especially for employees who work remotely or outside main offices. Routers can also enable specialized services such as VoIP, video conferencing, and Wi-Fi networks. With routers in place, your business can improve responses to customers and enable easier access to customer information. These are real benefits at a time when customers demand fast answers to questions, as well as personalized service. By using routers to build a fast and reliable small business network, employees are better able to respond rapidly and intelligently to customer needs. Routers can have a positive impact on your bottom line. Your small business can save money by sharing equipment such as printers and servers, as well as services such as Internet access. A fast and reliable network built with routers can also grow with your business, so you don't have to keep rebuilding the network and buying new devices as the business expands. Routers can help you protect valuable business data from attacks if they offer built-in firewalls or web filtering, which examines incoming data and blocks it as needed. Routers help your business provide secure remote access for mobile workers who need to communicate with other employees or use business applications. This is a common scenario for many businesses that have virtual teams and home-based telecommuters who need to share critical business information at any time of the day or night. Consumer or home networking products won't keep pace with the challenges of business growth; choosing business-grade routers instead gives you room to add features and functionality when needed, such as video surveillance, VoIP, integrated messaging, and wireless applications. This also provides the business continuity you'll need to bounce back quickly from unforeseen and disruptive events, like natural disasters. Our resources are here to help you understand the security landscape and choose technologies to help safeguard your business. Learn how to make the right decisions for designing and maintaining your network so it can help your business thrive.
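To make the routing-table lookup described above a little more concrete, here is a small illustrative sketch in Python. It is not vendor firmware or Cisco code; the routes, addresses, and function names are invented for the example. It shows the two ideas from the article: pick the most specific matching prefix for a destination, and among routes with the same prefix, prefer the one with the lowest metric.

import ipaddress

# A tiny, made-up routing table: (destination prefix, next hop, metric).
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"), "203.0.113.1", 10),   # default route
    (ipaddress.ip_network("10.0.0.0/8"), "10.0.0.1", 5),
    (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.1", 5),
    (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.254", 8),   # backup path, higher metric
]

def lookup(destination):
    # Collect every route whose prefix contains the destination address.
    addr = ipaddress.ip_address(destination)
    candidates = [route for route in ROUTES if addr in route[0]]
    if not candidates:
        return None  # no match: the packet would be discarded
    # Prefer the longest (most specific) prefix, then the lowest metric.
    return max(candidates, key=lambda route: (route[0].prefixlen, -route[2]))

print(lookup("10.1.2.3"))   # matches 10.1.0.0/16 and picks the metric-5 next hop
print(lookup("192.0.2.7"))  # only the default route matches

A real router performs this lookup in specialized hardware for every packet, but the selection logic, most specific prefix first and lowest metric as the tie-breaker, is essentially what the routing table described above encodes.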
<urn:uuid:fe7a2a76-5baf-4e24-aec4-31d6fd063711>
CC-MAIN-2022-40
https://www.cisco.com/c/en/us/solutions/small-business/resource-center/networking/how-does-a-router-work.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00698.warc.gz
en
0.946092
1,127
3.71875
4
AI Terms You Need To Know In 1956, Dartmouth Professor John McCarthy organized a conference called the "Dartmouth Summer Research Project on Artificial Intelligence." It was about how computers could be used to think like the human brain. McCarthy also did something else that was very important at the conference: He coined the phrase "artificial intelligence." Many of the participants were not particularly thrilled with this. But then again, no one could come up with anything better! Since then, the field of AI would go on to spawn many other AI terms, words, and acronyms. Many of them were technical or would just fade away. But of course, other AI terms have become major categories. Here's a look: Machine Learning (ML): This often gets confused with the phrase AI. But there are key differences. Keep in mind that AI describes the broad category of intelligent machines and software. Then there are various subsets and one of them is ML. The roots of this category go back to the late 1950s. At the time, IBM developer Arthur Samuel created the first ML program, which allowed a person to play checkers against a computer. However, he did not use the typical if/then/else structure for this, since he believed it was too inflexible. Instead, Samuel relied on processing data. This made it possible for a computer to understand how to play better checkers. Samuel defined ML as a "field of study that gives computers the ability to learn without being explicitly programmed." And yes, the category has gone on to be one of the most important in AI. Deep Learning: This often gets confused with the terms AI and ML. So what are the differences? Consider that deep learning is a subset of ML. It is also where much of the innovation of AI has happened during the past decade. The pioneering efforts of academics like Geoffrey Hinton, Yoshua Bengio, and Yann LeCun have been critical for the success. They leveraged neural networks, which process data by assigning a stream of weights to the items. The "deep learning" part of this is where there are many hidden layers that provide more sophisticated analysis. This is what has helped with breakthrough applications like self-driving automobiles, advanced fraud protection, and virtual assistants like Siri and Alexa. Natural Language Processing (NLP): This leverages AI to understand and process human language, including recognizing speech. This was one of the early applications of the technology, but progress was tough. Yet during the past ten years, there have been major strides with NLP. Consider that a key to this has been the use of deep learning. In the corporate world, NLP has been essential for the growth of chatbots. These systems have been a big help in improving customer support. According to Grand View Research, the spending is forecasted to hit $1.25 billion by 2025, with a CAGR (compound annual growth rate) of 24.3%. Explainability: Sophisticated AI systems like deep learning are often called "black boxes." This means that it is far from clear why the models are coming up with certain predictions and insights. Unfortunately, this can make it difficult for the technology to pass muster with regulators. What can be done? Well, there is something known as explainability. This uses sophisticated techniques to surface the reasons behind an AI model's predictions. Keep in mind that this is an emerging category – but it is showing promise – and is likely to be a strong growth area of the market. Generative Adversarial Network (GAN): This is one of the most recent innovations with AI. The developer of the GAN is Ian Goodfellow, a PhD in machine learning.
The story of how he came up with this concept is certainly interesting. It was while he was in Montreal in 2014. Goodfellow talked with friends about how deep learning could create photos. Then when he went back home, he started coding this up. The idea was to have two AI models compete against each other – and this could ultimately create content. The GAN would get instant traction. Goodfellow would then become one of the most recruited AI experts in the world and went to work for Google and Apple. Supervised and Unsupervised Learning: Supervised learning is the traditional approach to AI. This involves using labeled data to come up with the models. But this has some limitations. Note that most data is unstructured and this means it needs to be labeled. Oh, and the labeling process is often labor-intensive. Yet there is another approach: unsupervised learning. With this, a model will not need any labels. Instead, it will find the inherent patterns in the data (say by identifying clusters). In some cases, the AI can be used to create the labels as well. Reinforcement Learning: This is based on how people typically learn something – that is, by trial-and-error. Think of it as a reward-punishment system. So far, reinforcement learning has been focused on game playing. For example, DeepMind used it to beat the world champion of Go. But in the coming years, reinforcement learning could become incredibly important for commercial applications, such as for robotics and NLP.
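To make the supervised versus unsupervised distinction above a bit more tangible, here is a minimal, hypothetical sketch using the open-source scikit-learn library. The dataset is synthetic and the particular models are just illustrative choices, not the only ways to do this.

from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic data: 300 points drawn from 3 clusters.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised learning: the model trains on labeled examples (X paired with y).
classifier = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised predictions:", classifier.predict(X[:5]))

# Unsupervised learning: no labels are provided; the model finds structure on its own.
clustering = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("unsupervised cluster assignments:", clustering.labels_[:5])

The first model only works because every training example already carries a label; the second is handed the same points with no labels at all and still groups them, which is exactly the trade-off the section above describes.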
<urn:uuid:7ca7cae7-7dad-4b1d-9e71-59b2cd68fb88>
CC-MAIN-2022-40
https://aisera.com/blog/ai-terms-you-need-to-know/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00698.warc.gz
en
0.973751
1,090
3.46875
3
Artificial Intelligence (AI) in Cybersecurity AI and Cybersecurity Artificial intelligence (AI) enables machines to perform tasks that typically require human intelligence, including making decisions, recognizing human speech, perceiving visual elements, and translating languages. AI uses training data to comprehend context and determine how to respond or react in different situations. Artificial intelligence in cybersecurity is increasingly critical to protecting online systems from attacks by cyber criminals and unauthorized access attempts. If used correctly, AI systems can be trained to enable automatic cyber threat detection, generate alerts, identify new strands of malware, and protect businesses’ sensitive data. Benefits of artificial intelligence in cybersecurity include leveraging AI techniques—such as deep learning, machine learning (ML), knowledge representation and reasoning, and natural language processing—for a more automated and intelligent cyber defense. In this way, organizations can discover and mitigate the thousands of cyber events that they can come across daily. Is It Safe To Automate Cybersecurity? Strengthening cybersecurity currently requires human intervention. However, tasks such as system monitoring can be automated through AI. Automating the process will increase organizations’ threat intelligence capabilities and save them time discovering new threats. This is vital as cyberattacks increase in sophistication. Cybersecurity automation using AI is safe because it is built on existing use cases in various business environments. For example, human resources (HR) and information technology (IT) teams use AI to onboard new employees and provide them with the resources and appropriate level of access to do their job effectively. Automation is particularly important in cybersecurity given the ongoing shortage of expert security staff. This allows organizations to enhance their security investments and improve operations without having to worry about finding additional skilled personnel. The benefits of automating AI in cybersecurity include: - Cost-efficiency: Pairing cybersecurity with AI results in faster data collection. This makes incident management response more dynamic and efficient. It also removes the need for security professionals to carry out manual, time-consuming tasks so they can focus on more strategic activities that add value to the business. - Removing human error: A common weakness of traditional security defenses is the need for human intervention, which can lead to costly human error. Artificial intelligence in cybersecurity removes the human element from most security processes. This is a more efficient approach because human resources can be reallocated to where they are most required. - Better decision-making: Automating cybersecurity helps organizations identify and correct potential deficiencies in their security strategy. In this way, they are able to implement formalized procedures that can result in more secure IT environments. However, organizations also need to be aware that cyber criminals adjust their methods to resist new AI cybersecurity tools. Hackers also use AI to create advanced attacks and deploy new and updated forms of malware to target both traditional and AI-enhanced systems. How Can AI Help Prevent Cyberattacks? 
AI in cybersecurity reinforces cyber threat intelligence, enabling security professionals to: - Search for characteristics of cyberattacks - Strengthen their defenses - Analyze data—such as fingerprints, typing styles, and voice patterns—to authenticate users - Discover clues as to the identity of specific cyberattackers Applications of AI in Cybersecurity Password Protection and Authentication With AI in cybersecurity, organizations can better protect passwords and secure user accounts through authentication. Most websites include features that allow users to log in to purchase products or contact forms for people to input sensitive data. Extra security layers are necessary to keep their information secure and prevent it from getting into the hands of malicious actors. AI tools, such as CAPTCHA, facial recognition, and fingerprint scanners enable organizations to automatically detect whether an attempt to log in to a service is genuine. These solutions help prevent cybercrime tactics like brute-force attacks and credential stuffing, which could put an organization’s entire network at risk. Phishing Detection and Prevention Control Phishing remains one of the biggest cybersecurity threats facing businesses across all industries. AI within email security solutions enables companies to discover anomalies and indicators of malicious messages. It can analyze the content and context of emails to quickly find whether they are spam messages, part of phishing campaigns, or legitimate. For example, AI can quickly and easily identify signs of phishing, such as email spoofing, forged senders, and misspelled domain names. ML algorithm techniques allow AI to learn from data to make analysis more accurate and evolve to address new threats. It also helps AI better understand how users communicate, their typical behavior, and textual patterns. This is crucial to preventing more advanced threats like spear phishing, which involves attackers attempting to impersonate high-profile individuals like company CEOs. AI can intercept suspicious activity to prevent a spear-phishing attack before it causes damage to corporate networks and systems. As cyber criminals deploy more sophisticated methods and techniques, thousands of new vulnerabilities are discovered and reported every year. As a result, businesses struggle to manage the vast volume of new vulnerabilities they encounter every day, and their traditional systems cannot prevent these high-risk threats in real time. AI-powered security solutions such as user and entity behavior analytics (UEBA) enable businesses to analyze the activity of devices, servers, and users, helping them identify anomalous or unusual behavior that could indicate a zero-day attack. AI in cybersecurity can protect businesses against vulnerabilities they are unaware of before they are officially reported and patched. Network security involves the time-intensive processes of creating policies and understanding the network’s topography. When policies are in place, organizations can enact processes for identifying legitimate connections versus those that may require inspection for potentially malicious behavior. These policies can also help organizations implement and enforce a zero-trust approach to security. However, creating and maintaining policies across multiple networks requires a significant amount of time and manual effort. Organizations often do not deploy the correct naming conventions for their applications and workloads. 
This means security teams may have to spend more time determining which workloads belong to specific applications. AI learns organizations’ network traffic patterns over time, allowing it to recommend the right policies and workloads. With behavioral analytics, organizations can identify evolving threats and known vulnerabilities. Traditional security defenses rely on attack signatures and indicators of compromise (IOCs) to discover threats. However, with the thousands of new attacks that cyber criminals launch every year, this approach is not practical. Organizations can implement behavioral analytics to enhance their threat-hunting processes. It uses AI models to develop profiles of the applications deployed on their networks and process vast volumes of device and user data. Incoming data can then be analyzed against those profiles to prevent potentially malicious activity. Benefits of Artificial Intelligence (AI) in Managing Cyber Risks Implementing AI in cybersecurity offers a wide range of benefits for organizations looking to manage their risk. Typical benefits are: - Ongoing learning: AI’s capabilities constantly improve as it learns from new data. Techniques like deep learning and ML enable AI to recognize patterns, establish a baseline of regular activity, and discover any unusual or suspicious activity that deviates from it. AI’s ability to learn on an ongoing basis makes it more difficult for hackers to circumvent an organization’s defenses. - Discovering unknown threats: As cyber criminals devise more sophisticated attack vectors, organizations are left vulnerable to unknown threats that could cause massive damage to networks. AI provides a solution for mapping and preventing unknown threats, including vulnerabilities that have yet to be identified or patched by software providers. - Vast data volumes: AI systems can handle and understand vast amounts of data that security professionals cannot. In this way, organizations can automatically discover new threats among vast amounts of data and network traffic that might go undetected by traditional systems. - Improved vulnerability management: In addition to discovering new threats, AI enables organizations to manage vulnerabilities better. It helps them assess their systems more effectively, improve problem-solving, and make better decisions. It can also identify weak points in networks and systems so that organizations are constantly focused on the most critical security tasks. - Enhanced overall security posture: Manually managing the risk of a range of threats, from denial-of-service (DoS) and phishing attacks to ransomware, can be difficult and time-consuming. But with AI, organizations are able to detect various types of attacks in real time and efficiently prioritize and prevent risks. - Better detection and response: Threat detection is a necessary element of data and network protection. AI-enabled cybersecurity can result in rapid detection of untrusted data and more systematic and immediate response to new threats. Future of AI in Cybersecurity AI in cybersecurity is increasingly playing a pivotal role in the fight against more advanced cyber threats. Because AI continually learns from the data it is exposed to, new technologies built on AI processes and techniques are crucial to identifying the latest threats and preventing hackers from exploiting new vulnerabilities in the quickest time possible. How Fortinet Can Help? 
Fortinet offers AI-powered cybersecurity solutions to protect organizations against known and emerging cyber threats. FortiAI, which is a deep-learning solution designed specifically to remove the need for time-consuming manual investigation of cyberattacks, enables organizations to accelerate their responses to advanced threats by identifying and classifying attack vectors in real time and instantaneously blocking them from reaching corporate networks. FortiAI relies on data from FortiGuard Labs, which provides the latest insight into emerging security threats. FortiAI empowers organizations to detect and protect against the millions of threats that FortiGuard Labs discovers every day. How is AI used in cybersecurity? AI in cybersecurity is used to help organizations automatically detect new threats, identify unknown attack vectors, and protect sensitive data. What is artificial intelligence in cybersecurity? AI in cybersecurity is the use of techniques like deep learning, machine learning, and natural language processing to build more automated and intelligent security defenses. How does AI help in cybersecurity? AI in cybersecurity helps discover and mitigate new cyber events and attack vectors. It allows organizations to keep pace with the evolving threat landscape and handle massive volumes of threats. Is AI a benefit or threat to cybersecurity? AI systems are a huge benefit to organizations’ cybersecurity teams, helping them protect their networks from the latest emerging threats in real time. However, it is worth noting that cyber criminals increasingly use the same AI tools to evolve their attack vectors.
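As a concrete illustration of the behavioral-analytics idea discussed earlier, profiling normal activity and then flagging deviations from it, here is a minimal, hypothetical Python sketch using scikit-learn's IsolationForest. The features, numbers, and thresholds are invented for the example and are not drawn from FortiAI or any other product.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline of normal sessions: [logins per hour, MB transferred, failed logins].
normal_sessions = rng.normal(loc=[5, 200, 1], scale=[1, 50, 1], size=(500, 3))

# Train an anomaly detector on the baseline behavior only.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# Score new activity: one typical session and one that looks like credential abuse.
new_sessions = np.array([
    [5, 210, 0],      # close to the learned baseline
    [60, 5000, 25],   # far outside it
])
print(detector.predict(new_sessions))  # 1 = looks normal, -1 = flagged as anomalous

The point is not the specific model but the pattern: learn what normal looks like from historical data, then let deviations from that baseline drive alerts rather than relying solely on known attack signatures.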
<urn:uuid:4a0ee367-4575-46ff-9f1a-5dd2080ca7b8>
CC-MAIN-2022-40
https://www.fortinet.com/resources/cyberglossary/artificial-intelligence-in-cybersecurity?utm_source=blog&utm_medium=+&utm_campaign=artificial-intelligence
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00698.warc.gz
en
0.927295
2,118
3.3125
3
Critical infrastructure is becoming more dependent on networks of interconnected devices. For example, only a few decades ago, power grids were essentially operational silos. Today, most grids are closely interlinked — regionally, nationally, and internationally as well as with other industrial sectors. And in contrast to discrete cyberattacks on individual companies, a targeted disruption of critical infrastructure can result in extended supply shortages, power blackouts, public disorder, and other serious consequences. According to the World Economic Forum (WEF), cyberattacks on critical infrastructure posed the fifth-highest economic risk in 2020, and the WEF called the potential for such attacks "the new normal across sectors such as energy, healthcare, and transportation." Another report noted that such attacks can have major spillover effects. Lloyd's and the University of Cambridge's Centre for Risk Studies calculated the prospective economic and insurance costs of a severe cyberattack against America's electricity system could amount to more than $240 billion and possibly more than $1 trillion. Given these potential far-reaching consequences, cyberattacks on critical infrastructure have become a big concern for industry and governments everywhere — and recent events haven't done much to allay these fears. A Worldwide Phenomenon In May 2021, a huge distributed denial-of-service (DDoS) attack crippled large sections of Belgium's Internet services, affecting more than 200 organizations, including government, universities, and research institutes. Even parliamentary debates and committee meetings were stalled since no one could access the online services they needed to participate. A few days later, a ransomware attack shut down the main pipeline carrying gasoline and diesel fuel to the US East Coast. The Colonial Pipeline is America's largest refined-products pipeline. The company says it transports more than 100 million gallons a day of fossil fuels, including gasoline, diesel, jet fuel, and heating oil — or almost half the supply on the East Coast, including supplies for US military facilities. In August 2020, the New Zealand Stock Exchange (NZX) was taken offline for four trading days after an unprecedented volumetric DDoS attack launched through its network service provider. New Zealand's government summoned its national cybersecurity services to investigate, and cyber experts suggested the attacks might have been a dry run of a major attack on other global stock exchanges. In October 2020, Australia's Minister for Home Affairs, Peter Dutton, said his country must be ready to fight back against disastrous and extended cyberattacks on critical infrastructure that could upend whole industries. Obvious Uptick in DDoS Attacks During the pandemic, there's been a huge increase in DDoS attacks, brute-forcing of access credentials, and malware targeting Internet-connected devices. The average cost of DDoS bots has dropped and will probably continue to fall. According to Link11's Q1/2021 DDoS report, the number of attacks witnessed more than doubled, growing 2.3-fold year-over-year. (Disclosure: I'm the COO of Link11.) Unlike ransomware, which must penetrate IT systems before it can wreak havoc, DDoS attacks appeal to cybercriminals because they're a more convenient IT weapon since they don't have to get around multiple security layers to produce the desired ill effects.
The FBI has warned that more DDoS attacks are employing amplification techniques to target US organizations after noting a surge in attack attempts after February 2020. The warnings came after other reports of high-profile DDoS attacks. In February, for example, the largest known DDoS attack was aimed at Amazon Web Services. The company's infrastructure was slammed with a jaw-dropping 2.3 Tb/s — or 20.6 million requests per second — assault, Amazon reported. The US Cybersecurity and Infrastructure Security Agency (CISA) also acknowledged the global threat of DDoS attacks. Similarly, in November, New Zealand cybersecurity organization CertNZ issued an alert about emails sent to financial firms that threatened a DDoS attack unless a ransom was paid. Predominantly, cybercriminals are just after money. The threat actors behind the most recent and ongoing ransom DDoS (RDDoS or RDoS) campaign identify themselves as state-backed groups Fancy Bear, Cozy Bear, Lazarus Group, and Armada Collective — although it remains unclear whether that's just been a masquerade to reinforce the hacker's demands. The demanded ransoms ranged between 10 and 20 Bitcoin (roughly worth $100,000 to $225,000 at the time of the attacks), to be paid to different Bitcoin addresses. Mitigating the Risk Critical infrastructure is often more vulnerable to cyberattacks than other sectors. Paying a ransom has ethical implications, will directly aid the hackers' future operations (as noted by the FBI), and will encourage them to hunt other potential victims. Targeted companies are also urged to report any RDoS attacks affecting them to law enforcement. Organizations can't avoid being targeted by denial-of-service attacks, but it's possible to prepare for and potentially reduce the impact should an attack occur. The Australian Cyber Security Centre notes that "preparing for denial-of-service attacks before they occur is by far the best strategy; it is very difficult to respond once they begin and efforts at this stage are unlikely to be effective." However, as the architecture of IT infrastructure evolves, it's getting harder to implement effective local mitigation strategies. Case in point: Network perimeters continue to be weak points because of the increasing use of cloud computing services and devices used for remote work. Also, it is increasingly infeasible to backhaul network traffic, as legitimate users will be banned, too — potentially for hours or days. To minimize the risk of disruption and aim for faster recovery time objectives (RTOs) after an attack, organizations should become more resilient by eliminating human error through stringent automation. These days, solutions based on artificial intelligence and machine learning offer the only viable means of protection against cyberattacks.
<urn:uuid:109b1c0f-3b46-4520-bdc9-d5210ee89818>
CC-MAIN-2022-40
https://www.darkreading.com/attacks-breaches/critical-infrastructure-under-attack
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00698.warc.gz
en
0.952013
1,213
2.765625
3
The photonics research team developed two-dimensional arrays of micro-lasers designed to collectively surpass the power density of a single micro-laser while maintaining the same stability, the Army said Thursday. "The research results out of UPenn mark a significant step towards creating more efficient and fieldable laser sources," said James Joseph, program manager at the Army Research Office. A two-laser array can produce two super-modes and quadratically increase its power output while maintaining the coherency and stability required to preserve the data in transit. Light detection and ranging, or LiDAR, technology allows autonomous systems to optically sense objects. The researchers believe the array holds potential for LiDAR applications. The research team published its findings in the peer-reviewed journal Science.
<urn:uuid:5d9c897a-a610-4a2e-8732-71ea84924dde>
CC-MAIN-2022-40
https://executivegov.com/2021/04/army-finances-academic-micro-laser-research-to-boost-military-communications-james-joseph-quoted/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00698.warc.gz
en
0.88594
162
2.765625
3
A team from the University of California has developed an eel-like robot capable of exploring the deep, operating in salt water, and silently propelling itself using artificial muscles. The foot-long robot also looks the part. Because it’s connected via tether to an electronics board above the surface, its body is thin and translucent, rather than full of components. Exploring in silence Following in the wake of a similar marine robotics project from MIT’s CSAIL lab – which disguised an underwater camera as a robotic fish – the University of California team is seeking to find a solution to a problem that marine biologists have grappled with for years. Namely, how to monitor the deep and gather data without disturbing, damaging, or disrupting the ecosystem. Detailing their work in the April 25 issue of Science Robotics, the researchers point out that the majority of scientific underwater vessels are closer to submarines in nature, with rigid structures and electric motors that ensure they are heard coming. Their alternative is a soft robot that can swim in complete silence. Powered by the environment “Instead of propellers, our robot uses soft artificial muscles to move like an eel underwater without making any sound,” said Caleb Christianson, a Ph.D. student at the Jacobs School of Engineering at UC San Diego. The robot eel is able to do that by using the salt water in which it swims to generate enough electrical force to push it forwards. Cables apply an electric current to the salt water around the robot eel, as well as to pouches of water inside its artificial muscles. Electronics inside the robot then push a negative charge through the water outside and positive charges through its internal pouches. The result is power, activating the eel’s artificial muscles. The electrical charges cause the robot’s muscles to bend one way then the other, enough for the eel to glide along underwater. “Our biggest breakthrough was the idea of using the environment as part of our design,” said Michael T Tolley, the paper’s corresponding author and a professor of mechanical engineering at the Jacobs School at UC San Diego. “There will be more steps to creating an efficient, practical, untethered eel robot, but at this point we have proven that it is possible.” “This is, in a way, the softest robot to be developed for underwater exploration,” he added. The next stage for the research team will be to improve the eel’s ballast so that it can dive deeper. They have also been experimenting with fluorescent dye, loading it into the eel’s internal chambers as a prelude to an underwater communication system. Internet of Business says While aerial robotics tends to grab the headlines, alongside humanoid and industrial models and autonomous transport, marine robotics is a thriving area. Among the many applications are environmental monitoring, extreme-weather mapping, tidal studies – to predict the movement of plastic waste, for example – and the surveillance and repair of remote installations. But with the marine environment – on the surface or beneath it – comes a range of additional challenges, such as the need to protect the natural ecosystem from the machines. This forces researchers to get creative with power sources and materials, as this university project has done. An impressive achievement, with more to follow, no doubt.
<urn:uuid:aa48769c-a62f-4f27-9e65-19e450398fb1>
CC-MAIN-2022-40
https://internetofbusiness.com/robotic-eel-artificial-muscles-underwater-exploration/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00698.warc.gz
en
0.930478
698
3.609375
4
Google and Microsoft can access user data via the extended spellcheck features available in the Google Chrome and Microsoft Edge web browsers. Basic spellcheckers are enabled by default; the features that present this privacy risk, Chrome's Enhanced Spellcheck and Edge's Microsoft Editor, only do so once a user manually enables them. According to Summitt of otto-js, the security firm that reported the issue, when Chrome's Enhanced Spellcheck or Edge's Microsoft Editor (spellchecker) was enabled, "basically anything" entered into form fields in those browsers was transferred to Google and Microsoft. The form data sent this way can include PII such as name, address, email, date of birth, contact details, and bank and payment information. It remains unclear what happens to user data once it reaches third-party servers such as Google's. Users can, however, check whether enhanced spellcheck is enabled in their browser by copying and pasting the link "Chrome://settings/?search=Enhanced+Spell+Check" into their address bar. Otto-js also gave tips on how organizations can protect their users: "Companies can mitigate the risk of sharing their customers' PII by adding 'spellcheck=false' to all input fields, though this could create problems for users. Alternatively, you could add it to just the form fields with sensitive data. Companies can also remove the ability to 'show password.' That won't prevent spell-jacking, but it will prevent user passwords from being sent," otto-js explains. The sources for this piece include an article in BleepingComputer.
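To make the quoted mitigation concrete, here is a small illustrative sketch, not taken from the article, of how a team might audit its own HTML templates for sensitive-looking input fields that do not set spellcheck="false". The list of field-name hints, and the idea of treating a match as "sensitive," are assumptions made purely for the example.

```python
# Hypothetical audit script: flag sensitive-looking <input> fields that still
# allow spellcheck (and could therefore be "spell-jacked" if a user enables
# enhanced spellcheck). Field-name hints below are invented for illustration.
from html.parser import HTMLParser

SENSITIVE_HINTS = ("password", "ssn", "dob", "card", "account", "email")

class SpellcheckAuditor(HTMLParser):
    """Collect sensitive-looking <input> fields lacking spellcheck="false"."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        attr_map = {k: (v or "") for k, v in attrs}
        name = (attr_map.get("name") or attr_map.get("id") or "").lower()
        looks_sensitive = any(hint in name for hint in SENSITIVE_HINTS)
        spellcheck_off = attr_map.get("spellcheck", "").lower() == "false"
        if looks_sensitive and not spellcheck_off:
            self.findings.append(name or "<unnamed input>")

def audit(html_text: str) -> list:
    auditor = SpellcheckAuditor()
    auditor.feed(html_text)
    return auditor.findings

if __name__ == "__main__":
    sample = '<form><input name="email"><input name="password" spellcheck="false"></form>'
    print(audit(sample))  # -> ['email']: the email field still allows spellcheck
```

Running such a check in CI would surface form fields that need the attribute added before templates ship, which is one way to operationalize the advice above.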
<urn:uuid:7b7b7a95-d5fd-4b69-9a07-a274faa70c0b>
CC-MAIN-2022-40
https://www.itworldcanada.com/post/google-and-microsoft-can-access-user-data-via-extended-spellcheck-features
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00698.warc.gz
en
0.90555
375
2.71875
3
According to a study conducted by Information Services Group, 72% of companies will be relying on Robotic Process Automation (RPA) by 2019 to increase operational efficiency, productivity, and compliance. Manufacturing companies, which have traditionally relied heavily on robotics and automation for production, are now looking to RPA to transform other departments that may have a number of error-prone, slow, or costly processes. The changing landscape within industries such as manufacturing is due to the vast digital transformation brought about by the increasing integration of cutting-edge technologies such as the Internet of Things, artificial intelligence and machine learning, and advanced automation. Within manufacturing in particular, the use of automated robotic machines on assembly lines has been common practice for many years. However, transferring this kind of automation to other key areas within manufacturing, such as accounts receivable, invoice processing, and purchase order management, has been difficult until fairly recently. In this article, we'll look at what robotic process automation is, the role it has within manufacturing, the benefits it brings, and how it could affect the future of automation within the manufacturing industry. What is Robotic Process Automation? With various forms of automation slowly making their way into manufacturing, it is easy to see how the framework for robotic process automation came about. Robotic process automation (RPA) is a method of automation that uses software robots to interpret and automate certain back-office and operational tasks and processes, such as procurement, customer communications, and inventory management. Replicating how tasks are performed within a process-related application or graphical user interface (GUI) is key to robotic process automation. With this capability, there are a number of potential use cases for robotic process automation within manufacturing. RPA for administrative tasks and reporting would also unburden human employees of the mundane and repetitive work involved in such duties. RPA can be scaled up or down depending on requirements, and there are several ways it could be integrated within a manufacturing environment depending on what one hopes to achieve. As we'll now see, there are several benefits that robotic process automation can bring to manufacturing. Key Drivers of RPA Adoption One of the most attractive benefits any technological advancement can bring to almost every industry is extending the original capabilities of the entity adopting and integrating it into its systems. With robotic process automation, manufacturers are discovering that what previously needed three employees working all day to achieve can be done by a single RPA system. As well as cutting down on the time it takes to perform certain tasks and duties, robotic process automation can also reduce the long-term costs associated with various aspects of manufacturing, including bookkeeping, maintenance and upkeep, manual labor, insurance, and reporting and administrative operations. The implementation of robotic process automation is often used to reduce the complexity of various back-office or managerial processes, thereby increasing a manufacturer's overall agility.
By streamlining specific, user-configured operations, robotic process automation can also enable manufacturers to stay compliant with new and incoming regulations. Customization is another big pulling factor when researching a potential investment in robotic process automation. RPA systems can be deployed for manufacturing utilities as well as other rule-based routines or operations, making them highly customizable to suit customers' needs. Another key benefit of robotic process automation systems is their endurance compared with human workers. Manufacturers can run these systems 24 hours a day, seven days a week, because they do not need breaks, food, sleep, or rest. RPA systems are also less prone to error and less likely to have their judgement clouded by external stimuli or distractions. The Future of Automation in Manufacturing? As we continue through the Fourth Industrial Revolution, the use of digital, virtual, artificial intelligence, and automation technologies will no doubt become increasingly common within manufacturing environments. Robotic process automation, while still in relative infancy, will no doubt go on to be an integral element of the manufacturing process. The transformation of traditional manual processes into new, much more efficient digital systems is enabling manufacturers not only to improve the quality of their products and services but also to eliminate many errors and speed up the pace of various processes and operations. The benefits of these systems are currently being experienced by manufacturers and other industrial and commercial enterprises around the world, and the continued development of such systems will likely result in an increase in both quality and productivity within the organisations that adopt them. With more and more aspects of manufacturing becoming automated, the question remains: how far will technologies such as robotic process automation go? Many companies across a variety of industries are increasingly turning to automation as a way to enhance their internal processes and operations, and as a way of modernizing to adapt to the next digital disruption that awaits us.
<urn:uuid:6be7f841-95f0-4517-85f6-b4833065dd6e>
CC-MAIN-2022-40
https://www.lanner-america.com/blog/manufacturing-digitalization-rpa-key-enabler-industry-4-0/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00698.warc.gz
en
0.960808
997
2.609375
3
The United States' intelligence agencies are meant to keep us safe from threats internal and external. But sometimes they have missed the mark and unintentionally put American citizens and businesses at greater risk. Whether this was through the development of cyber weapons that fell into the wrong hands or massive data leaks, it goes to show that no one's cybersecurity is perfect, even at the highest levels of government. But no matter who's to blame, cyberattack insurance from CyberPolicy can financially support your organization after sustaining a malicious cyberattack or crippling data breach. Don't go it alone; invest in cyberattack insurance today! In 2013, former National Security Agency (NSA) contractor Edward Snowden leaked classified information pertaining to the agency's warrantless mass surveillance of American citizens. This revelation incited a national debate about the morality and efficacy of such a program, and whether privacy or security should be prioritized as a protection of American freedom. However, research shows that domestic surveillance has "had no discernible impact" on preventing terrorist attacks; meanwhile, the NSA program has been used inappropriately by employees to spy on love interests, spouses and ex-lovers. Not exactly model behavior from an agency dedicated to protecting American cybersecurity. In March 2017, WikiLeaks published "Vault 7," a trove of hacking tools allegedly stolen from the CIA. This document contained many eye-catching hacks, including digital eavesdropping through Samsung smart TVs, compromising web routers and obtaining remote command of smart cars for possible assassination attempts. Still, one of the most alarming revelations was the CIA's method of circumventing encrypted messengers. Encryption secures message transfers with secret keys so that only the intended recipient can decipher them. As yet, there is no way to read these encrypted messages without authorization, especially since the keys change constantly to prevent incursion. The CIA, however, found a way to read text messages before they are encrypted and sent, thereby eradicating the efficacy of such security features. If this falls into the wrong hands, hackers could likely read all messages and intercept all data transfers coming to and from businesses. Employee Data Leaks In early 2016, a hacker dumped the names, titles, email addresses and phone numbers of 9,000 Department of Homeland Security (DHS) employees, putting 20,000 FBI employees at risk of exposure as well. The hacker told Motherboard that he also stole military emails and credit card numbers, although this was never proven. How'd he do it? The hacker was able to access the files through a compromised email account of a single Department of Justice employee. This hack may not affect your business directly, but it does go to show that simple email scams can have big consequences, and not even the FBI is immune to such attacks. If you want to avoid a similar fate, train your employees to fend off suspicious emails, attachments, links and downloads whenever possible. As you can see, intelligence agencies have a difficult job, balancing their own need for secrecy against the need to infiltrate other actors' defenses. When things get out of hand, it's bad news for business. Make the smart decision and visit CyberPolicy for your free cyberattack insurance quote today!
<urn:uuid:93aeb06e-ac0f-4342-ab24-7521d6a80aff>
CC-MAIN-2022-40
https://www.cyberpolicy.com/cybersecurity-education/3-times-intelligence-agencies-have-botched-cybersecurity
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00098.warc.gz
en
0.945235
655
2.5625
3
For the first time in its 60 years history, the opposition recently won an election in the Democratic Republic of Congo (DRC). Whilst it is too early to state whether or not the transition of power has been peaceful, let alone successful, this could mark a watershed moment in one of Africa’s key economies. In fact, the incumbent, Joseph Kabila, did not even run, let alone contest the result. However, all may not be as it seems – Martin Fayulu, the other frontrunner held a comfortable 20-point lead over Felix Tshisekedi, the supposed victor in pre-election polls. Rumours abound of voter fraud and a secret power sharing agreement between Kabila and Tshisekedi. Doubt also clouds the results as the Independent National Electoral Commission announced that voters in the cities of Butembo and Beni, opposition strongholds struggling with an outbreak of Ebola, would not cast their votes until March. The town of Yumbi also had elections postponed due to violence. It is unclear how those late votes might affect the result, but there are enough registered voters to change the result, potentially causing a crisis of legitimacy and further unrest. This ‘first’ for the DRC is not an anomaly in Africa’s long road to democracy, but part of a growing trend across the continent – a transition that has not come at the barrel of a gun. Statistically, coups are becoming decidedly less frequent, having been a hallmark of African politics in the post-colonial era. Between 1960 and 1999 there were between 39 and 42 coups every decade. In contrast, there were 22 between 2000 and 2009 and there have been 17 since 2010, including the very recent attempt to remove President Ali Bongo from power in Gabon. The DRC follows the 2015 Nigerian election, when Goodluck Jonathan became the first incumbent to concede defeat – a huge moment for the most populous country in Africa and the one with the largest GDP. Western companies and governments will look on Kinshasa with interest in the coming weeks as the democratic process of the country with the largest cobalt reserves in the world unfolds. The price of cobalt has rocketed over the past few years, with experts predicting shortages from 2022 onwards. The mineral is a vital component in the lithium-ion batteries that power phones, devices, electric cars and super-alloys, used to make turbine blades for gas turbines and jet engines. Naturally, this is a material critical to our future technologies and economic security. In the past, cobalt was a by-product, and subject to the price fluctuations of the other, more valuable minerals usually extracted with it, namely nickel and copper. However, demand now means it is lucrative in its own right. The rise in demand and price means companies and governments are more concerned by the reliability and sustainability of cobalt sources. And so, the DRC, with 60% of the worlds cobalt supply, now matters. Mining companies moving into the cobalt market are naturally concerned by the instability of the DRC and the reliability of its exports, both in terms of production and the unethical manner in which these materials are mined. Much depends on the direction the country takes going forward, and whether it can evolve in to a country companies are willing to do business in. Central Africa has, to date, had little success in this regard. Glencore, the London-based mining giant, has run in to significant political and economic problems in recent months in the DRC, including three court cases and a bribery probe. 
In March, the DRC almost tripled the amount mining companies must pay in royalties on cobalt to the state, to 10% for every tonne sold of the newly declared 'strategic mineral'. Glencore have just written off $5.6 billion in debt to Gécamines, the DRC state-owned mining corporation, and paid $150 million and $41 million to the DRC to settle 'historical disputes' and 'exploration expenses' respectively. Glencore is also being sued by a friend of former President Kabila, Dan Gertler, an Israeli businessman sanctioned by the US for acquiring "his fortune through hundreds of millions of dollars' worth of opaque and corrupt mining and oil deals". They have already been heavily fined in Canada for this business relationship. Clearly, the energy market in the DRC presents a risky, but perhaps necessary, challenge to any foreign company looking to capitalise on the surge in demand for natural resources, especially when one considers the investment and interest coming from China, who are looking to dominate the electric car market in the years to come. Of course, the Chinese are arguably less concerned with global niceties in the quest for precious resources, soft power and influence. Western companies should be seriously concerned by this new 'resource war', knowing that they risk being squeezed out by simply being appropriately cautious. A fine and well-informed line must be trodden. Foreign companies must consider what levels of political instability, corruption and economic uncertainty are tolerable to their long-term goals. For now, with the value of cobalt high, this may be an inconvenient, yet necessary, thorn in their side. In return, they will want to see value for money, stability and security of their investment. By the time Africa has matured democratically, however, the competition for resources may well be over.
<urn:uuid:9fb10e9c-bd6b-4592-9668-6d167f247112>
CC-MAIN-2022-40
https://kcsgroup.com/transitions-in-cobalt-country/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00098.warc.gz
en
0.965011
1,117
2.5625
3
Although a top-down approach is preferred to the bottom-up method, both have associated advantages and disadvantages. The lists below take a look at some of the relative advantages and disadvantages of each method. Top-down Network Design: Advantages: Begins with a focus on an organization’s specific goals and requirements for network applications and services, while allowing potential future needs to be considered and accounted for. Disadvantages: Requires thorough initial needs analysis in order to determine specific requirements, and ensure that all possible applications and services have been considered. Bottom-up Network Design: Advantages: Generally a faster approach based on past projects and implementations that works within an existing environment. Disadvantages: The approach may not take all necessary applications and services into consideration, leading to a design that ultimately may not meet the needs of an organization, and may need to be redesigned in the future.
<urn:uuid:654f0325-5f24-4890-8a52-86be7ba2d0fc>
CC-MAIN-2022-40
https://www.2000trainers.com/ccda-study-guide/comparing-network-design-approaches/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00098.warc.gz
en
0.937782
185
2.859375
3
National Cybersecurity Awareness Month (NCSAM) Begins National Cyber Security Awareness Month (NCSAM) – observed every October – was created as a collaborative effort between government and industry to ensure every American has the resources they need to stay safer and more secure online. Since its inception under leadership from the U.S. Department of Homeland Security and the National Cyber Security Alliance, NCSAM has grown exponentially, reaching consumers, small and medium-sized businesses, corporations, educational institutions and young people across the nation. 2018 marks the 15th year of National Cyber Security Awareness Month. NCSAM 2018 Themes: Week 1: Oct. 1–5: Make Your Home a Haven for Online Safety; Week 2: Oct. 8–12: Millions of Rewarding Jobs: Educating for a Career in Cybersecurity; Week 3: Oct. 15–19: It’s Everyone’s Job to Ensure Online Safety at Work; Week 4: Oct. 22–26: Safeguarding the Nation’s Critical Infrastructure.
<urn:uuid:2bb4db4f-fc50-4d8a-8e3d-2a83a3208488>
CC-MAIN-2022-40
https://cyware.com/cyber-security-events/others/national-cybersecurity-awareness-month-ncsam-begins-8cc44cee/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00098.warc.gz
en
0.929376
211
3.015625
3
World Password Day 2022: Gone Are the Days of Password123 In honor of World Password Day, the first Thursday of May every year, cybersecurity professionals strive to provide new insights and tips to help individuals and organizations keep their valuable data safe from relentless cybercriminals. As the pivotal first line of defense for our online accounts, systems, and networks, passwords play a critical role in the privacy and security of our digital ecosystems. Unfortunately, passwords are easy to share, steal – and increasingly, easy to guess or crack via so-called “brute force” attacks. Once compromised, passwords can provide digital thieves with a virtual passport to your most sensitive data and systems. Choosing a new password is a task often performed in haste, but which deserves careful consideration. Here are seven quick tips for better password protection: - Screen Passwords for Compromises: Check to see if a password has been previously compromised before using it. One good resource to vet passwords is the “known compromised” password corpus, Have I Been Pwned? - Forget What You Know About Complexity Rules: Users tend to formulate predictable passwords. The National Institute of Standards and Technology (NIST) recommends screening and blacklisting previously breached passwords and avoiding repetitive or sequential characters (“aaaaaa” or “1234abcd”), or context-specific words, like the name of the service, the user’s name, and derivatives. - Don't Worry About Periodic Resets: The use of password expiration is losing favor as research shows it doesn't really do much for security. NIST suggests avoiding forced periodic resets, but strongly recommends changing passwords with any evidence of compromise. - Use Lengthy Passphrases: Longer passwords are harder to crack or guess, surpassing the effectiveness of any other complexity rule. NIST recommends passwords of at least eight characters, and organizations should strive for longer minimums, as well as designing systems that accept passwords with as many as 64 characters to encourage users to utilize passphrases. - Enable MFA (When Available): With the pandemic causing more people to work remotely, more organizations have been investing in and adopting multi-factor authentication (MFA), one of the best ways to mitigate the risk of passwords, and a common component of zero trust architectures. A problem that arises, however, isn’t that MFA is not available, but that users and administrators don’t take advantage of it when it’s already there. One study found 78% of Microsoft® 365 administrators do not have MFA activated within their environments. - Give Users a Password Manager: When MFA is unavailable or not being utilized, password managers can provide a great middle ground for managing risks when passwords are sole forms of authentication. Password managers automatically create longer passwords with complex, random strings of characters every time a user creates a new account – yet the user only has to remember a single passphrase. - Double-Check Practices for Choosing Master Passwords: When using a password manager, it’s advisable that master passwords be thoroughly hardened. NIST suggests choosing long passphrases for master passwords, storing them offline, and avoiding password managers that allow recovery of the master password. Tired of Pesky Security Prompts That Keep Making You Prove That “You’re Still You”? 
In addition to the password enhancement tips provided above, solutions such as CylancePERSONA™ provide increased protection that can lead to a more password-friendly future. CylancePERSONA provides continuous authentication with machine learning (ML) and predictive artificial intelligence (AI) to dynamically adapt a security policy based on user location, device, and other factors and protect against human error and well-intentioned workarounds. How CylancePERSONA Works - User Location - CylancePERSONA uses ML and predictive AI to identify behavioral and location patterns of multiple users to determine risk. Known work locations can also be preloaded. - Network Trust - CylancePERSONA determines the frequency of network use and adjusts security dynamically based on profiles. For example, accessing a public Wi-Fi for the first time would adjust the risk score accordingly. - User Behavior – This soon-to-be-released feature of CylancePERSONA will add the ability to determine and build a contextual risk score, based on how and when a user normally accesses data. The feature works by identifying when the user’s behavior seems consistent and trustworthy. - Device and App DNA – Another soon-to-be-available feature, this capability will enable CylancePERSONA to determine whether a specific device or set of applications are compliant and up to date, adjusting security policy based on device and app “DNA” profiles. As we mark the 10th World Password Day, initially started by Intel in 2013, it’s a great reminder to check for potential areas of vulnerability and explore options to keep systems and networks safe from inherent insecurities and threats. Passwords persist because they're easy to use, easy to design logins around, and are a well-known and well-tolerated security measure. While they continue to reign as the primary (and often sole) form of authentication, there are smart steps to take for better password protection. Many organizations, encouraged by analyst findings such as the Forrester Research Report indicating 70% of enterprises are still password-centric, have added stronger layers of protection with the enablement of MFA and zero trust. In addition to the recommendations of NIST and other trusted cybersecurity advisors, solutions such as CylancePERSONA can offer an alternative method of protection to help keep organizations secure. Visit BlackBerry to learn more about CylancePERSONA, or to schedule a free trial.
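As a practical follow-up to the first tip above, screening passwords against a known-compromised corpus, the sketch below queries the public Have I Been Pwned range API using its k-anonymity model, so the full password hash never leaves the machine. The endpoint and SUFFIX:COUNT response format reflect HIBP's published documentation, but treat those details as assumptions to verify against the current API before relying on this in production.

```python
# Illustrative password-screening helper using HIBP's k-anonymity range API.
# Only the first five hex characters of the SHA-1 digest are sent; the response
# lists matching suffixes and breach counts. Endpoint/format assumed from
# public HIBP documentation -- verify before production use.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches (0 = not found)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # NIST-style checks: length first, then screen against the compromised corpus.
    candidate = "correct horse battery staple"
    if len(candidate) < 8:
        print("Too short")
    elif pwned_count(candidate) > 0:
        print("Previously breached -- choose another passphrase")
    else:
        print("Not found in known breaches")
```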
<urn:uuid:7594aecb-0874-4c66-b417-bf0d5be06aac>
CC-MAIN-2022-40
https://blogs.blackberry.com/en/2022/05/world-password-day-2022-gone-are-the-days-of-password123
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00098.warc.gz
en
0.914235
1,212
2.75
3
The U.S. Navy is developing new software programs to make its ocean-going robots more independent. By the end of the decade, the Navy plans to deploy squadrons of unmanned underwater robots to survey the ocean. But there are a lot of challenges operating underwater, and the robots will require a great deal of autonomy to carry out search and mapping missions. That’s the goal of the Adaptive Networks for Threat and Intrusion Detection or Termination (ANTIDOTE) program. Funded by the Office of Naval Research (ONR), ANTIDOTE’s team of scientists from the Massachusetts Institute of Technology and the University of Southern California is developing software-based methods for large teams of robots to perform more sophisticated missions autonomously in dynamic, time-critical environments and with limited communications. A major part of the program focuses on autonomous planning and replanning methods, said Marc Steinberg, ANTIDOTE’s program officer at ONR. The underlying theory behind ANTIDOTE was for persistent surveillance of dynamic environments, regardless of the vehicle type carrying out the mission, Steinberg said. For example, some successful simulation experiments have been conducted with unmanned air systems. Undersea vehicles have very limited communications compared with systems that operate above ground, Steinberg said. This drives a need for autonomy because the robot subs can’t rely on a human operator. Additionally, there are unique challenges in navigation, mobility and sensing underwater. For example, the undersea glider robots in ANTIDOTE's experiments use changes in buoyancy for propulsion, rather than an active device such as a propeller. This enables them to have an extended endurance, but it also requires that gliders move up and down in depth in a saw-tooth-like pattern, which has a big impact on how to do autonomous planning to maximize the value of the scientific data being collected. “The sea experiments were a great way to examine how some promising theoretical results would work in a real-world situation of practical value to scientists,” Steinberg said. Prior theoretical work had looked at how autonomous vehicles can best perform persistent surveillance in a dynamic environment. In the sea tests, the new software was used to generate paths for the underwater gliders to collect oceanographic data. The method takes into account both user priorities and ocean currents in determining these paths. The experiments, in southern California and in Monterey Bay, Calif., involved a glider using this new capability and a reference glider that followed a more traditional fixed path. Results of the experiment showed that the vehicle using the new method executed two to four times as many sampling profiles in areas of high interest when compared against the unmodified reference glider, while maintaining an overall time constraint for the completion of each circuit of the path, ANTIDOTE researchers said in a statement. Overall, the results validate that the theoretical results can be of value in solving real-world surveillance problems with autonomous systems, Steinberg said. The ANTIDOTE program is near the end of its third year. After that, Steinberg said that it is up to ONR leadership to decide whether to fund it for an additional two years. “As a fundamental research program, the main products are new theory and methods," he said. "Some of these are being implemented in software and will be available to other researchers via open architectures for robots.”
<urn:uuid:085bc821-a23a-4049-8356-085fe865d144>
CC-MAIN-2022-40
https://gcn.com/emerging-tech/2011/12/navys-undersea-robots-get-smarter/283229/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00098.warc.gz
en
0.93683
706
3.046875
3
Data structures can be provided as schemas during activity configuration or they can be defined within the transformation itself. When data structures are provided in an activity, the schemas are inherited by the transformation using the activity as a source or target in the operation. Once the source and target schemas of a transformation are defined, you create transformation mappings between the source and target schemas to define how data should be processed. Data Structure Types A flat data structure consists of one or more single fields and records in a two-dimensional structure. Examples include CSV files, simple XML files, and single database tables. A flat data structure is also referred to as a flat file structure. A hierarchical data structure has one or more parent-child or nested relationships between fields and records in a complex structure. A hierarchical data structure is sometimes referred to as a relational, multilevel, complex data, or tree structure. Display of Data Structures Data structures are displayed in a tree format that can be expanded and collapsed to show either the entire tree or just a portion of it. Each tree consists of nodes and fields, where fields within the source data structure can be mapped to fields within the target data structure. Nodes have a disclosure triangle to the left of the node name that is used to collapse or expand the node. By default, nodes are expanded up to 8 levels deep for schemas with 750 or fewer nodes and up to 5 levels deep for schemas with more than 750 nodes. All nodes beneath a target node can be expanded at once using the schema actions menu option Expand All Nodes Beneath This Node (see Target Nodes in Mapping Mode). If you expand or collapse nodes, Cloud Studio remembers the last expansion state you were using the next time you access the transformation. Once expanded, nodes display any contained child nodes and fields. Nodes can be considered as folders with child nodes as sub-folders. Fields are contained within nodes and are listed with their data type ( For example, in the target structure shown below, the node json includes the child node item, which contains the fields title. The node item also contains the child node employeeDetails, which contains the fields Display of Mapped Fields A transformation mapping consists of target fields or nodes and their corresponding scripts. These scripts may contain references to source fields or nodes or to project components, use functions, or contain other valid script logic. A mapping does not include target fields that are not mapped. When source objects and variables are defined within the target field, they appear as blocks within the target field. The mapped target field is displayed with a purple vertical line along the left of the target field block: When both a source and a target schema are visible on the screen and you are in mapping mode, a visual line shows the connection with the source object. Hover over a mapped target field to show a light gray line that connects from the source object(s) used in the mapping to the mapped target field: The solid black line shown in the above image is explained in the next section, Loop Nodes. The target side of the mapping also indicates if a field has any default values (outlined in red in the image below) or joins (outlined in green in the image below). For example, this transformation inserts data into a database whose id fields are auto-incremented and whose created_at field is set equal to the current time by default. 
It also shows that the child table qa_employee has been joined on the id field to its parent table If a collapsed node contains target field mappings, that node is shown in bold to indicate it contains mappings: A loop node is a source or target node with repeating data values, such as line items in an invoice or a set of customer records. When loop node fields are mapped, a solid black iterator line automatically appears, indicating that the transformation process will loop through the source data set. The location of the generated iterator lines depends on the multiplicity of the corresponding source loop nodes. A transformation can have zero or more iterator lines. When multiple iterator lines are present, precedence is given from top to bottom of the target structure. To toggle the display of an individual iterator line, click directly on the circle shape that is closest to the target node: The individual loop node line then becomes an orange stub that when clicked again displays the full line: As an example of a loop node mapping, consider the following hierarchical source structure containing a top-level source node ( item) with fields that provide information about a company. A child source node, locationDetails, includes an array ( json$item.locationDetails$item.) of objects with fields for multiple store locations within a company. Both the parent and child node are considered loop nodes because the data may contain multiple company records with multiple store location records for each company. Now consider that this data is being mapped to a flat target structure, resulting in a record for each store location. As you map fields, an iterator line automatically appears connecting source and target loop nodes. This line indicates that the target will loop as many times as there are repeating sets of data in the source, or in this example will loop through each store location record for each company. Mapping from a Multi-Instance Source to Single-Instance Target When the generated target loop node depends on more than one source loop node, you may need to resolve a multiple occurrence conflict with the mapping. If the source data structure is a multi-object array and is being mapped to a target data structure with a single object, this dialog is displayed: To use the first instance of the source in the mapping, select Yes. This means that only the first record will be mapped. For example, given the following mapping, only the first customer record in the array is mapped to the target structure containing only a single customer. Notice that each mapped target field now contains a script as indicated with the script icon . When you toggle to script mode for any mapped field, you will see that a #1 has been added within the path of the mapped source object to indicate that the first instance is mapped. If you do not want the first instance of the source to be used, you can specify other logic using the instance-resolving functions (see Instance Functions). If you are mapping data from a flat structure to a hierarchical structure, the data may need to be normalized before being transformed. By default, Jitterbit Harmony uses a normalization algorithm to construct the target tree. This will convert the flat structure of the source into a hierarchical source structure that can then be mapped to the hierarchical target structure. In the target structure, the root element and all of the multiple-instance elements under the root are used to create the structure of secondary source elements. 
The attributes (or fields) of these secondary source elements are flat data elements that are then used in the mappings of the corresponding target element. With the source structure properly defined, the normalization process is simplified to combining nodes with the same parents. There are three options for normalization: - Complete Normalization: All the elements with the same parent and all the fields are reduced to one element. (This is the default.) - Partial Normalization: The same as complete normalization, except that the lowest children are not normalized. - No Normalization: Each flat record creates a branch of elements; no reduction of elements is performed when creating the hierarchical source structure. It is possible for the hierarchical structure to contain a single instance node. In that case, only the first element for this root will be kept, and flat records that conflict with this root data node will be ignored. To disable normalization, set the relevant Jitterbit variable to true (see Transformation Jitterbit Variables). Instance and Multiple Mapping Transformation mapping is the process used to define the relationship between input data and the resulting output data. Depending on which data structure types are used, the transformation mapping may be described as instance mapping or multiple mapping. Instance mapping describes when the mapping of a target instance depends on possibly more than one instance of a source. Instance mapping can be either flat-to-flat (one-to-one) or hierarchical-to-flat (many-to-one). Multiple mapping describes the mapping of two hierarchical data structures or the mapping from a single, flat structure that is actually hierarchical in nature, with its lower segments containing multiple sets of values such as name/value pairs. Multiple mapping can be either hierarchical-to-hierarchical (many-to-many) or flat-to-hierarchical (one-to-many). Sample situations for both instance and multiple mapping can be found within the Design Studio documentation: - Instance Mapping - Multiple Mapping Although these examples are for Design Studio, the same concepts can be applied in Cloud Studio. For hands-on training modules that have examples of mapping simple and complex database, text, and XML files, see Introduction to Jitterbit Harmony Cloud Studio.
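To illustrate what complete normalization does conceptually, the sketch below uses plain Python rather than Jitterbit script syntax: flat rows that share the same parent values are reduced to a single parent element whose remaining fields are collected as child records. The field names and the "locations" child node are invented for the example and do not correspond to any Jitterbit API.

```python
# Conceptual sketch of "complete normalization" for a flat-to-hierarchical mapping:
# rows with identical parent-key values collapse into one parent element that
# holds the remaining fields as a list of child records. Names are illustrative.
from collections import OrderedDict

flat_rows = [
    {"company": "Acme", "region": "West", "store_id": 1, "city": "Reno"},
    {"company": "Acme", "region": "West", "store_id": 2, "city": "Fresno"},
    {"company": "Bolt", "region": "East", "store_id": 7, "city": "Albany"},
]

def normalize(rows, parent_keys, child_keys):
    """Group flat rows into parent elements, each holding a list of child records."""
    parents = OrderedDict()
    for row in rows:
        key = tuple(row[k] for k in parent_keys)
        parent = parents.setdefault(
            key, {**{k: row[k] for k in parent_keys}, "locations": []}
        )
        parent["locations"].append({k: row[k] for k in child_keys})
    return list(parents.values())

print(normalize(flat_rows, ("company", "region"), ("store_id", "city")))
# -> two parent elements (Acme, Bolt); Acme carries two nested location records
```

Partial normalization would stop short of grouping the lowest-level children, and no normalization would leave every flat row as its own branch.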
<urn:uuid:88d9dcf8-f3ba-4dc2-81e9-545f75df5eb2>
CC-MAIN-2022-40
https://supportcentral.jitterbit.com/display/CS/Data+Structures
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00098.warc.gz
en
0.86604
1,968
2.640625
3
Researchers at the University of Surrey have successfully demonstrated proof-of-concept of using their multimodal transistor (MMT) in artificial neural networks, which mimic the human brain. This is an important step towards using thin-film transistors as artificial intelligence hardware and moves edge computing forward, with the prospect of reducing power needs and improving efficiency, rather than relying solely on computer chips. The MMT, first reported by Surrey researchers in 2020, overcomes long-standing challenges associated with transistors and can perform the same operations as more complex circuits. This latest research, published in the peer-reviewed journal Scientific Reports, uses mathematical modelling to prove the concept of using MMTs in artificial intelligence systems, which is a vital step towards manufacturing. Using measured and simulated transistor data, the researchers show that well-designed multimodal transistors could operate robustly as rectified linear unit-type (ReLU) activations in artificial neural networks, achieving practically identical classification accuracy as pure ReLU implementations. They used both measured and simulated MMT data to train an artificial neural network to identify handwritten numbers and compared the results with the built-in ReLU of the software. The results confirmed the potential of MMT devices for thin-film decision and classification circuits. The same approach could be used in more complex AI systems. Unusually, the research was led by Surrey undergraduate Isin Pesch, who worked on the project during the final year research module of her BEng (Hons) in Electronic Engineering with Nanotechnology. Covid meant she had to study remotely from her home in Turkey, but she still managed to spearhead the development, complemented by an international research team, which also included collaborators in the University of Rennes, France and UCL, London. Isin Pesch, lead author of the paper, which was written before she graduated in July 2021, said: “There is a great need for technological improvements to support the growth of low cost, large area electronics which were shown to be used in artificial intelligence applications. Thin-film transistors have a role to play in enabling high processing power with low resource use. We can now see that MMTs, a unique type of thin-film transistor … have the reliability and uniformity needed to fulfil this role.”
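For readers unfamiliar with the activation function mentioned above, the snippet below is a minimal, generic illustration of ReLU inside a tiny two-layer network. It is unrelated to the Surrey hardware or the team's actual model; the layer sizes and random data are invented for demonstration.

```python
# Minimal illustration of the rectified linear unit (ReLU) activation that the
# multimodal transistors are reported to reproduce in hardware: negative
# pre-activations are clipped to zero between a network's weighted sums.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))          # a small batch of 16-dimensional inputs
w1 = rng.normal(size=(16, 8)) * 0.1   # hidden-layer weights (randomly initialised)
w2 = rng.normal(size=(8, 3)) * 0.1    # output-layer weights

hidden = relu(x @ w1)                 # the step an MMT-based circuit would perform
logits = hidden @ w2
print(hidden.min(), logits.shape)     # clipped pre-activations are now exactly zero
```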
<urn:uuid:224c618e-d968-45a8-9f0c-ad76d0a61888>
CC-MAIN-2022-40
https://www.architectureandgovernance.com/app-tech/edge-processing-research-takes-discovery-closer-to-use-in-artificial-intelligence-networks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00098.warc.gz
en
0.954107
468
3.171875
3
Spam. We’ve all seen enough of it. But just as familiarity has bred contempt (and stopped most email users responding to it), spammers have come up with a new technique to snare the unwary and get around corporate security measures. The spammers´ latest technique involves image spam – emails that contain little more than an image embedded into the body of the message. The image, of course, contains the spam message that you hoped to avoid. So what is image spam? How much of a threat is it? And more importantly, how can you ensure that your staff are protected against that threat? Image spam – a snapshot With image spam, the content is simply contained within an image embedded into the message body, in an attempt to bypass filtering protection layers. This type of spam has a much greater drain on network and bandwidth resources than text spam, because the images mean a larger file size. For example, a “traditional” text spam averages 5kb in size, while an image spam message is 360% larger with a size of 23kb. While 23k is not much on its own, multiply that by the millions of spam emails send every day, and the scale becomes apparent. In the last six months of 2006, 40% of all spam was image-based, which is a doubling in volume over the previous six-month period. This spike is likely due to spammers testing the viability of image spam. Now that format effectiveness has been demonstrated, SurfControl has seen that spammers have converted more of their text-based messages to image spam, and the image spam volumes continue to rise dramatically. Why has this shift happened? Well, just like the classic battle between authors of malicious code and antivirus firms, spammers are investigating new technologies and trying new tricks in an effort to stay one step ahead of spam filters and promote their dubious offerings. After all, for them it’s a business. Many spam filters, especially older or less sophisticated ones, rely upon certain text criteria on which to make judgments. Such filters typically watch for predetermined words in the subject lines of e-mail messages, suspicious word patterns and word frequency. Image spam is not easily stopped by such basic filters, because it contains random words to make it appear legitimate. Know the enemy So what does image spam look like? The message body of a typical image spam e-mail comprises three components. The first component is a short section of random text at the beginning of the message, which is then followed by the second component — the image file, which is typically an image of text with the spam content. The last component is a lengthy section of random text at the bottom of the message body, which attempts to fool unsophisticated spam filters. The images typically don´t contain any clickable links that take the viewer to a website, and it´s unlikely someone would be so enthralled with a spam message that they would type the URL into their browser. So why do spammers use this technique? The majority of these messages are classic “pump and dump” stock scams, where the spammer invests in a stock and then sends out messages hyping the stock, hoping to inspire a quick, profitable run. And it seems that it can work. Some observers have watched the price movements in shares being hyped by image spam, and saw gains of over 25% in the days following a spam campaign. So there is cause and effect at play. 
Protect against the image So what measures can organisations take to protect their business and their employees against image spam, without impacting on network performance and productivity levels? The most effective protection against image spam and other emerging threats is in a layered approach. By implementing solutions through layered deployments, you yield tremendous savings in network resources, bandwidth and overall administration, while ensuring that business security and compliance requirements are most efficiently met. The first security measure against spam is tried-and-tested end user awareness. By making sure your users understand the risk of responding to spam and phishing attempts, you’ll reduce the impact of spam on your network and business operations. But user awareness should be backed up by enforcement, too. The advent of image spam is causing many problems through its ability to defeat many traditional e-mail filters. You should review your Internet security and understand what technologies your vendor is using to protect your organisation. Today’s most effective and sophisticated solutions combine a variety of intelligent image spam detection technologies, including a heuristics engine, a reputation service, and targeted Optical Character Recognition (OCR) technology. Layering these advanced technologies with deployment on the network and an in-the-cloud email filtering solution provides the greatest level of protection. In-the-cloud filtering removes large volumes of spam before it reaches the gateway protecting valuable network resources as well as maximising the overall image spam detection rate. There’s no one-stop point solution which will protect your organisation against all incoming spam. It therefore makes sense that by layering a number of different solutions, you’ll succeed in creating a more robust, comprehensive filtering solution which will minimise the threat of spam emails entering the network. A layered approach also offers greater control over inbound and outbound email, with the ability to closely manage traffic. With the rise in the number of spam emails, many organisations are choosing to add an additional email filter layer in the cloud, in order to filter out the vast majority of spam before it hits the network. In this way, organisations can free up bandwidth, reduce ISP costs, and remove the expense of having to upgrade servers to cope with additional load. For businesses which need to perform deeper content inspection for confidential data management or compliance regulations and in order to prevent sensitive and confidential information leaving the network, adding another security layer using an appliance-based solution provides granular content filtering. Plan today for the threats of tomorrow Image spam is just the latest threat that is damaging e-mail performance as a business critical tool. Its ability to defeat many traditional e-mail filters has raised the issue of reviewing e-mail security, and there are options available that can improve the capture rate of image spam. However, in any review of security technology an organisation must look ahead and ensure that a solution not only solves today’s issues, but has the technology and deployment model to protect against tomorrow’s unknown threats too.
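To make one of the filtering layers discussed above more concrete, here is a rough sketch of a single heuristic signal: scoring an inbound MIME message by how much of its payload is image data versus readable text. Real gateways combine many such signals with OCR, heuristics engines, and reputation services; the 0.9 threshold and the scoring rule are invented purely for illustration.

```python
# Illustrative single-signal heuristic: the higher the share of image bytes in a
# message, the more it resembles the image-spam pattern described above
# (short random text, embedded image, more random text). Threshold is invented.
from email import message_from_bytes
from email.message import Message

def image_spam_score(raw_message: bytes) -> float:
    msg: Message = message_from_bytes(raw_message)
    image_bytes = text_bytes = 0
    for part in msg.walk():
        payload = part.get_payload(decode=True) or b""
        ctype = part.get_content_type()
        if ctype.startswith("image/"):
            image_bytes += len(payload)
        elif ctype.startswith("text/"):
            text_bytes += len(payload)
    total = image_bytes + text_bytes
    return (image_bytes / total) if total else 0.0

def looks_like_image_spam(raw_message: bytes, threshold: float = 0.9) -> bool:
    return image_spam_score(raw_message) >= threshold

# Usage: feed the raw RFC 822 bytes of an inbound message and quarantine high scorers,
# then let deeper layers (OCR, reputation) make the final decision.
```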
<urn:uuid:60e6fb99-d282-465f-af93-819e7a129065>
CC-MAIN-2022-40
https://it-observer.com/image-spam-getting-picture.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00098.warc.gz
en
0.944146
1,321
2.59375
3
We are living in a connected world. The Internet is in nearly everybody’s life and in the palms of their hands. In the first quarter of 2014, smartphone sales hit 281.5 million units, rising 28.6 percent Q/Q.1 The number of mobile devices connected to the Internet exceeded the global population in 2014, and will continue to grow from there. The amount of data consumers use is growing as well. Some sources claim that data traffic will increase eleven fold between 2013 and 2018.2 The Internet is also infiltrating other aspects of our lives. Smart cars, smart glasses, and even smart TVs are available today. And although wireless technology is nothing new, it highlights how we have adapted from a life with the Internet as a luxury into a life with it constantly running in the background. This concept that many parts of our lives can now be controlled wirelessly is called the “Internet of Things.” How exactly will the Internet handle all of the data transportation required? Through Transmission Control Protocol (TCP), the key transport protocol of the Internet infrastructure. TCP is the essential glue, which together with Internet Protocol (IP), ensures that all applications connect smoothly to our devices. It allows us to share resources with billions of people, all over the world, at the same time. It also establishes and manages traffic connections and congestion while taking care of transmission errors. TCP has many moving parts, with new ones being added every day. Without the proper tuning and combination of these parts, TCP can hurt more than it helps in optimizing network use. Now F5 has created a framework to tune and adjust the parameters of TCP to enhance the connections and subscriber experience. Initially, TCP had very few configurable parameters. When it was designed in 1973, during the infancy of the Internet, it was made for a wired infrastructure—the Advanced Research Projects Agency (ARPA) Net. The ARPA Net was a low-capacity network of 213 computers for the purpose of sharing knowledge among some of the world’s leading research institutions at the time; thus, the design of the network and protocols was very different from what we use today. Beginning in 1986 after 1G technology was released, the Internet began to experience “congestion collapses” where the transmission rates of the networks dropped by a thousand fold from 32 Kbps to 40 bps. This drastic drop in rates led to some investigation and analysis by leading computer scientists including Van Jacobson, who helped create what we now know as congestion control algorithms. These algorithms are methods that allow a TCP stack to alter how it treats data based on network conditions. The Internet has followed the trend of most technologies still alive from the early ’70s—advancing at a rate nobody could imagine. Now, with the rise of smartphones, we are using mobile networks such as 3G and 4G, and high-capacity, fixed-line networks. Needless to say, these networks have very different characteristics than their ancestral networks. As the Internet has progressed, user experience has always been the most important factor. The new breadth of access technologies leads to a wide spread of network characteristics. Recently, network access has shifted from wired networks to 3G and 4G cellular networks. 
| Network | Base Latency | Base Download Speeds | Buffer Sizes | Characteristics |
|---|---|---|---|---|
| 3G (released early 2000s) | 100–500 ms | 21–42 Mbps | Small | High packet loss, even without congestion. |
| 4G (late 2000s) | 50 ms | Up to 300 Mbps | Larger than 3G | Lower packet loss due to error correction. Increased latency due to buffer sizes and not necessarily congestion. |

Modern network traffic is harder to control than it was in the 1980s because packet loss does not necessarily mean congestion in the networks, and congestion does not necessarily mean packet loss. As shown in figure 1, 3G and 4G networks both exhibit different types of behavior based on their characteristics, but a server may view the different aspects as congestion. This means that an algorithm cannot focus only on packet loss or latency for determining congestion. Other modern access technologies, such as fiber to the home (FttH) and WiFi, expand upon the characteristics represented above by 3G and 4G, making congestion control even more difficult. With different access technologies having such different characteristics, a variety of congestion control algorithms has been developed in an attempt to accommodate the various networks. The changing network characteristics have led to a simultaneous evolution of congestion control algorithms. Initial algorithms, such as TCP Reno, use packet loss to determine when to reduce the congestion window, which influences the send rate. TCP Reno increases the send rate and congestion window by 1 MSS (maximum segment size) until it perceives packet loss. Once this occurs, TCP Reno slows down and cuts the window in half. However, as established in the previous section, modern networks may have packet loss with no congestion, so this algorithm is not as applicable. The next generation of algorithms is based on bandwidth estimation. These algorithms change the transmission rate depending on the estimated bandwidth at the time of packet loss. TCP Westwood and its successor, TCP Westwood+, are both bandwidth-estimating algorithms, and have higher throughput and better fairness over wireless links when compared to TCP Reno. However, these algorithms do not perform well with smaller buffers or quality of service (QoS) policies. The latest congestion control algorithms are latency-based, which means that they determine how to change the send rate by analyzing changes in round-trip time (RTT). These algorithms attempt to prevent congestion before it begins, thus minimizing queuing delay at the cost of goodput (the amount of useful information transferred per second). An example of latency-based algorithms is TCP Vegas. TCP Vegas is heavily dependent upon an accurate calculation of a base RTT value, which is how it determines the transmission delay of the network when buffers are empty. Using the base RTT, TCP Vegas then estimates the amount of buffering in the network by comparing the base RTT to the current RTT. If the base RTT estimation is too low, the network will not be optimally used; if it is too high, TCP Vegas may overload the network. Also, as mentioned earlier, large latency values do not necessarily mean congestion in some networks, such as 4G. By knowing the traffic characteristics and keeping the current inadequate algorithms in mind, service providers can implement an ideal TCP stack. The ideal TCP stack should achieve one goal: optimizing a subscriber's QoE. To accomplish this, it must do three things: establish high goodput, minimize buffer bloat, and provide fairness between the flows.
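Before turning to those three goals, the loss-based behaviour attributed to TCP Reno above can be made concrete with a toy model: grow the congestion window while transmissions succeed, then halve it on perceived loss. The numbers and the per-round-trip simplification are illustrative only and are not drawn from any particular implementation.

```python
# Toy sketch of Reno-style AIMD behaviour: exponential growth in slow start,
# additive increase in congestion avoidance, multiplicative decrease on loss.
# This is why interference loss on wireless links, with no real congestion,
# still slows such a stack down. Values are illustrative only.
MSS = 1  # grow by one maximum segment size per round trip in congestion avoidance

def reno_step(cwnd: float, ssthresh: float, loss: bool):
    if loss:
        ssthresh = max(cwnd / 2.0, 2 * MSS)   # multiplicative decrease
        return ssthresh, ssthresh
    if cwnd < ssthresh:
        return cwnd * 2.0, ssthresh           # slow start: exponential growth
    return cwnd + MSS, ssthresh               # congestion avoidance: additive increase

cwnd, ssthresh = 1.0, 32.0
for rtt, loss in enumerate([False] * 8 + [True] + [False] * 4):
    cwnd, ssthresh = reno_step(cwnd, ssthresh, loss)
    print(f"RTT {rtt:2d}: cwnd={cwnd:6.1f} ssthresh={ssthresh:5.1f} loss={loss}")
```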
High goodput is important for determining if the stack is optimized because it is a measure of how much of the data going through the network is relevant to the client. Goodput is different from throughput, which includes overhead such as unnecessary retransmission and protocol headers. Goodput also addresses the difference between content that was stalled or failed to complete versus content that the consumer was able to utilize. To help with maximizing goodput, TCP needs to address packet loss from interference as well as handle both small and large router buffers. Delay-based algorithms fail when competing with other flows for bandwidth; bandwidth-based algorithms fail when the buffers are too small or when quality of service policies are present in the network; loss-based algorithms fail by incorrectly slowing down for interference-based loss. Buffer bloat occurs when too many packets are buffered, increasing queuing delay and jitter in the network. Buffer bloat leads to performance issues by impacting interactive and real-time applications. It also interferes with the RTT calculation and negatively impacts retransmission behaviors. Thus, minimizing buffer bloat is ideal for an optimized TCP stack. Loss-based algorithms fail to minimize buffer bloat because they react after packets have been lost, which only happens once a buffer has been filled. These algorithms fail to lower the send rate and allow the buffer to drain. Instead, the algorithms choose rates that maintain the filled buffer. Fairness between flows ensures that no one user’s traffic dominates the network to the detriment of other users. Delay-based algorithms fail to fulfill this criteria because loss-based flows will fill all of the buffers. This leads to the delay-based flows backing off and ultimately slowing down to a trickle. The F5 solution accomplishes the goal of the ideal TCP stack. It improves QoE for customers—resulting in less subscriber churn and increased revenue for service providers. High goodput is achieved by maximizing the amount of data sent within a single packet and optimizing how quickly data is sent. The proprietary hybrid loss and latency-based algorithm, named TCP Woodside, is designed to maximize goodput while minimizing buffer bloat. It controls buffer size by constantly monitoring network buffering, and will slow down preemptively when needed—leading to a reduction in packet loss and minimal buffer bloat. However, when the queuing delay is minimal, TCP Woodside will rapidly accelerate to maximize the use of the available bandwidth, even when interference-based packet loss is present. Buffer bloat can be avoided by pacing the flow of data transmitted across the network. By knowing the speed at which different flows are being sent, the stack can control how quickly to send the packets through to the end device. This allows the buffers to adjust up without being overfilled. As a result, inconsistent traffic behaviors and packet loss due to network congestion are prevented. In the figure 3 graphs below, a non-optimized stack’s latency is compared to that of an F5 optimized stack. Both stacks have throughputs of 11 Mbps. In the left graph, the non-optimized stack has an increasing RTT—up to as much as 2.5 seconds—as more packets are sent through the network and the buffer starts to become bloated. However, in the right graph, the optimized stack’s RTT stays around 200 milliseconds even as more packets are sent. 
This steady RTT time leads to an improved end-user QoE due to less “bursty” traffic, and reduces buffer bloat as well. Not only does rate pacing help with buffer bloat, but it also improves the fairness across flows. Without rate pacing, packets are sent immediately and consecutively. Having two flows at the same time means one flow will see different network conditions than the other flow, usually with respect to congestion. These conditions will affect the behavior of each flow. As shown in the figure 4 left graph below, the flows have different behaviors at different times. Sometimes one flow has more bandwidth and sends more information. However, the next second, another flow may gain that bandwidth and stop the flow of others. Controlling the speed at which packets are sent on a connection allows gaps to occur between packets on any individual flow. Instead of both flows attempting to send consecutive packets that become intermixed, one flow will send a packet, and the second flow can then send another packet within the time gap of the first flow. This behavior changes how the two flows see the network as well. Rather than one flow seeing an open network and the other seeing a congested network, both flows will likely recognize similar congestion conditions and be able to share the bandwidth more efficiently (as shown in the right graph). With TCP Woodside and rate-pacing features working together, live test data shows that performance improves enough to bump subscribers from one category of congestion or signal strength to one category better. In figure 5, an optimized subscriber on heavy congestion receives better performance than the baseline medium congestion, and the optimized medium congestion signal performance is better than the baseline uncongested. The F5 stack, which implements both standardized and proprietary optimizations, accomplishes its goals through two main features: proprietary hybrid loss and latency-based algorithm (TCP Woodside), and rate-pacing capabilities. These features are able to constantly monitor network buffers—sending packets at rates that prevent buffer bloat and improving fairness across flows—while also preemptively slowing down to prevent congestion during heavy traffic. Once traffic lightens up, the algorithm speeds up to maximize the use of available bandwidth. The Internet has gone through many changes since it was initially implemented on 213 fixed-line hosts in the late 1970s. With the number of Internet-connected devices now exceeding the global population, people speculate about the future speed of the Internet. As the world moves toward becoming completely mobile, new technology is being developed to handle the traffic across wireless networks. Though many types of TCP stacks are available, only F5’s properly provides for all three characteristics of an ideal TCP stack: having high goodput, minimizing buffer bloat, and allowing for fairness between flows. In addition to these unique functions, F5’s TCP stack integrates with other F5 solutions. This allows multiple functionalities—including deep packet inspection, traffic steering, and load balancing—to be consolidated onto one platform.
<urn:uuid:f362973a-c48b-4339-a1b6-f1ba320f7932>
CC-MAIN-2022-40
https://www.f5.com/ja_jp/services/resources/white-papers/tcp-optimization-for-service-providers
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00098.warc.gz
en
0.946521
2,660
3.578125
4
When building your local area network (LAN), inevitably you’ll need to connect it to a wide area network (WAN). A WAN differs from a LAN. - You probably own the LAN entirely. Every switch, router, and cable is likely to be under your direct control. - You probably do NOT own the WAN. You might own a WAN router such as a Cisco ISR, but from there, the WAN is owned by a service provider, aka a carrier such as Verizon, AT&T, Level 3, Comcast, or CenturyLink. Internet circuits provided by your local ISP are WAN circuits as well. The net effect of you not owning your WAN is that you lease WAN infrastructure on a contractual basis. What is really at the other end of that WAN cable? Let’s consider WAN circuits for a moment. What are they, really? When talking about in-building cabling, a WAN circuit is a cable that connects your equipment to the carrier’s equipment. That cable could be used to signal something you’re familiar with, like straightforward Ethernet. That cable could be used to signal in a way that you’re less familiar with, like T1 (in the US) or E1 — channelized signals that come under the general heading of time division multiplexing (TDM). The cable might be single- or multi-mode fiber optic (probably an Ethernet hand-off) or twisted-pair copper (increasingly Ethernet, but just as likely to be TDM). That cable is handed off to you by the carrier from the demarcation point (demarc). The demarc likely appears as a jack in the wall to you. From there, where does it go? What’s behind the jack? The answer is that it depends. In general, that demarc point runs back to carrier owned equipment nearby. If your business is located in a shared office complex, the carrier might have equipment right on the campus to connect your circuit. If you’re in your own building or in a residential neighborhood, the cable might run out to a utility pole. Eventually, your WAN circuit will land at a central office. Now, it’s at this point that we have to make a key observation. The connection between your office and the central office is what we call the last mile. The last mile, at least in the US, has one big problem: it’s often owned by a single organization. That owner is likely to be the incumbent local exchange carrier (ILEC). You might be buying a WAN circuit from, say, CenturyLink. But for CenturyLink to get that circuit to you, at some point, they have to deal with whoever owns the last mile, often the incumbent local exchange carrier (ILEC). You won’t have to deal with that in the sense that you won’t get a bill from the last mile owner. But the fact that the last mile is very possibly owned by a single entity presents a concern when designing WAN infrastructure for your business. Considerations for last mile design. One instinct most network engineers will have is to use diverse carriers to improve WAN resilience. The idea is that if one of your carriers has a problem, the second one will not. Add dynamic routing, and voila: business continuity. The logic is that at least one of the two circuits will be up at any given time. A common enterprise business strategy is to lease a private (expensive) WAN to connect all of their remote offices together, and use public (cheap) Internet circuits secured with VPN as a failover. Another strategy used by those companies able to justify the expense is to create two private WANs, each running in parallel. The comparatively cheap “Internet plus VPN” circuit is used as a tertiary connection. 
And yet another strategy is to use multiple private WANs — one for one group of offices, one for another, and perhaps a third or fourth for others. HQ sites connect to all the carriers. Remote sites only connect to a single carrier. This is a viable solution for businesses with hub-and-spoke data flows, i.e. remote offices that talk to HQ, but don’t need to talk directly to one another very much. This approach reduces the failure domain such that only a limited number of offices are impacted when a particular carrier is having a bad day. All of these approaches spread their traffic across multiple carriers, expecting to be able to tolerate a single failure in the WAN. However, there is still a more fundamental single point of failure to ponder: the last mile. - If all of the carriers you lease WAN infrastructure from use the ILEC to connect your business to their network, then an ILEC problem could knock out all carrier connections. This is true if the service is public Internet or private WAN. - A single event could damage multiple physical lines in a shared conduit. This is the “Billy Backhoe” problem, or the “car crashed into a utility pole” problem. If the carriers you chose share a common route, one bad thing happening in meatspace can take both connections out of service. For enterprise WAN designers, the solution is to do your very best to insure the following. - Different carriers should enter and exit your building at different places. This is a tough request, especially if you don’t own the building or if the build costs are high. But if you can manage it, this reduces the chance that Billy Backhoe takes out both carriers. If all the conduit for communications lines enters the building in one place, it represents a point of risk. - Run some cables aerial, and others underground. Again, this is another tough one to manage if you inherited the physical plant — you’ve got what you’ve got and augmenting what you’ve got might be prohibitively expensive. But especially in the case of new construction, this is a step to consider that goes hand in hand with the step above. - Ask your carriers to provide you with a map illustrating how the last mile circuit gets from your building to the CO. If all carriers are taking the exact same route, ask for a diverse path. There’s a decent chance that there are multiple ways to get from your building to the CO. The idea is to have the paths be separated as much as possible to reduce the chance of something taking out both lines. Now, getting this map will take some time, but trust me — the information exists. The ILEC most certainly has it, and as a customer, you should be able to see it. At the same time, expect it to take a while before you’re seeing the diagram. I’ve never worked with an ILEC or carrier that moved all that quickly on a request like this. - Look for unique cable plants in the area that offer true last mile diversity. For example, a cable operator might be able to provide a physically distinct connection from some other carrier that relies on the ILEC. In addition, metropolitan areas often have fiber network operators who might be in (or near) your location. - Another option might be to use wireless technology as an alternate WAN connection. 4G/LTE router interfaces are easy to come by these days, although to get decent performance from them, you might need an outside antenna. But if you can pull it off, Billy Backhoe can’t kill it. 
Other types of wireless WAN providers might exist in your area, but using them is highly dependent on having line of sight to their towers. In addition, you might have to erect an antenna on the roof, something building owners don’t always grant permission to do. As a side note, if you have offices within several miles of each other, you might be able to connect them yourself via line-of-sight wireless. A wireless specialist in your area should be able to perform a feasibility study at a reasonable price. To the future. If you’ve conquered last mile diversity, the next challenge is managing routing across the infrastructure. For hybrid WANs, this is, frankly, a pain in the back side. The core issue is that routing protocols worry primarily about best path, but don’t have a sufficiently complex metric that reflects a true “best” at any given point in time. This is where software defined WAN solutions start to add value. Therefore, enterprises that are building out diverse WANs designed for business resilience might be interested in SD-WAN as a technology that can make better use of the links they are leasing. That said, no amount of SD-WAN is going to remedy a last mile problem that takes out all of your WAN circuits. Implementing the best physically redundant design possible is still necessary.
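One practical, if partial, way to sanity-check the path-diversity advice above is to compare the hop lists seen from each circuit toward the same destination. The sketch below assumes you have already collected those hop addresses (for example with traceroute run out of each WAN link); the addresses shown are documentation-range examples, and IP-level overlap is only a hint, not proof, of shared conduit or central-office infrastructure.

```python
def shared_hops(path_a, path_b):
    """Return hop addresses that appear in both carrier paths.

    path_a and path_b are lists of hop IPs collected separately, e.g. by
    running traceroute toward the same destination out of each WAN circuit.
    Overlap early in both paths hints at shared last-mile infrastructure.
    """
    seen_on_b = set(path_b)
    return [hop for hop in path_a if hop in seen_on_b]

if __name__ == "__main__":
    # Example addresses only (documentation ranges), not real carrier hops.
    carrier_1 = ["10.1.1.1", "203.0.113.9", "198.51.100.4", "192.0.2.77"]
    carrier_2 = ["10.2.2.1", "203.0.113.9", "198.51.100.80", "192.0.2.77"]
    print("hops seen on both circuits:", shared_hops(carrier_1, carrier_2))
```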
<urn:uuid:a127392a-ca7c-4084-91ca-41a95a3deaae>
CC-MAIN-2022-40
https://ethancbanks.com/enterprise-wan-design-last-mile-considerations/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00299.warc.gz
en
0.947735
1,848
2.59375
3
Some folks think that DBAs and application developers inhabit different universes. At times this may seem to be the case, but the successful DBA must understand application development and the issues involved in programming and design. Although DBAs are usually are viewed as system "folk," they most definitely must be tied into the application development and design projects of their organization. Application code is written to access data in the database; the DBA must have a sound understanding of how that is happening, as well as ways to improve it. Application design includes database concerns such as interfacing SQL with traditional programming languages and the type of SQL to use. But every aspect of program coding will affect the usability and effectiveness of the application. Furthermore, each application program must be designed to ensure the integrity of the data it modifies – and that means understanding transaction integrity and the concept of “unit of work” and appropriate COMMIT logic. Not only does this impact data integrity, but the scope of the unit of work can have a significant impact on the ability for other concurrent workloads to access (or modify) the same data. And that can impede performance. Designing a Proper Database Application System Designing a proper database application system is a complex and time-consuming task. The choices made during application design will impact the usefulness of the final delivered application. An improperly designed and coded application may need to be redesigned and re-coded from scratch if it is inefficient, ineffective, or not easy to use. To properly design an application that relies on databases for persistent data storage, the system designer must match the application development languages and tools to the physical database design and the functionality of the DBMS being used. The first thing to be mastered, though, must be a sound understanding of SQL. SQL is coded without embedded data-navigational instructions. The DBMS analyzes each SQL statement and formulates data-navigational instructions "behind the scenes." The DBMS understands the state of the data it stores, and so it can produce efficient and dynamic access paths to the data. The result is that SQL, used properly, provides a quicker application development and prototyping environment than is available with corresponding high-level languages. Furthermore, the DBMS can change access paths for SQL queries as the data characteristics and access patterns change, all without requiring the actual SQL to be changed in any way. Of course, doing so can require action on the part of the DBA to rebind the application code to the DBMS. SQL sometimes can get very complex. DBAs are needed to help unravel the complexity and assure that the SQL is written as effectively as possible. Although programmers should be able to examine plan table or show plan information, the nature of doing so often falls to the DBA… especially in a production environment. The DBA Needs to be the Champion of SQL and Understand the “Framework” Being Deployed The DBA needs to be the champion of SQL. Programmers should be encouraged to do the work in the SQL, instead of breaking it apart and putting it into host language code. By putting the work into the SQL, the DBMS can control access paths. When the volume or nature of the data changes significantly all that is required to access the data differently is re-optimizing the SQL using DBMS commands. 
If the work instead is in the program code, a programmer would have to re-write the code to get the access paths to change... and who among us really believes that will ever happen? Additionally, application programs require an interface for issuing SQL to access or modify data. The interface is used to embed SQL statements in a host programming language (such as COBOL, Java, C, or Visual Basic). Standard interfaces enable application programs to access databases using SQL. There are several popular standard interfaces, or APIs (Application Programming Interfaces), for database programming, including ODBC, JDBC, SQLJ, and OLE DB. DBAs need to understand these APIs and how they are being used to develop database applications in their shops. The DBA also needs to understand the “framework” being deployed. For the Java programmer, J2EE offers a set of coordinated specifications and practices that together enable solutions for developing, deploying, and managing multitier enterprise applications. The J2EE platform simplifies enterprise applications by basing them on standardized, modular components. J2EE provides a complete set of services to those components and handles many details of application construction without requiring complex programming. So J2EE is not exactly a software framework, but a set of specifications, each of which dictates how various J2EE functions must operate. And then there is the Microsoft .NET framework, which provides a comprehensive development platform for the construction, deployment, and management of applications. The .NET framework provides CLR (common language run time) and a class library for building components using a common foundation. From a data perspective, the most important component the .NET framework is ADO.NET, which provides access to data sources such as a database management system. ADO.NET enables .NET developers to interact with data in standard, structured, and predominantly disconnected ways. A Good DBA Should Know ... Of course, there are additional development issues outside the application interface level that DBAs need to be aware of. Applications also interface with other types of system software. Application servers, transaction servers, message queuing software, and the like can complicate the development cycle - and interfere with performance. A good DBA will have an understanding of how this software interacts with the DBMS and be able to lend guidance to the application development team. Now, I am not saying that the DBA has to be a whiz-bang programmer, but he or she should at least have done the job in the past - and be comfortable interacting with those currently doing the job. If that is not the case, that DBA is not likely to be overly effective at their job.
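As a concrete, language-neutral illustration of "putting the work in the SQL" through a standard interface, the hedged sketch below uses Python's DB-API with SQLite purely as a stand-in for whichever API a shop actually uses (ODBC, JDBC, SQLJ, ADO.NET); the table, data, and query are invented for the example.

```python
import sqlite3

# A throwaway in-memory database stands in for the real DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, total REAL)")
conn.executemany("INSERT INTO orders (region, total) VALUES (?, ?)",
                 [("EAST", 120.0), ("WEST", 80.5), ("EAST", 45.0)])

# Let the DBMS do the filtering and aggregation instead of looping in host code.
query = "SELECT region, SUM(total) FROM orders WHERE total > ? GROUP BY region"
for row in conn.execute(query, (50.0,)):
    print(row)

# The access path belongs to the DBMS: a DBA can inspect it (and re-optimize
# after data or index changes) without the application SQL changing at all.
for step in conn.execute("EXPLAIN QUERY PLAN " + query, (50.0,)):
    print(step)
```

The same pattern applies regardless of host language: bind parameters, let the optimizer pick the access path, and let the DBA inspect and rebind that path as the data changes.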
<urn:uuid:bdbe9589-2314-483d-8cc0-37a2558d081b>
CC-MAIN-2022-40
https://www.dbta.com/Columns/DBA-Corner/The-DBAs-Guide-to-Application-Development-92526.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00299.warc.gz
en
0.92907
1,216
2.921875
3
Quoting the official documentation, Solidity "is a contract-oriented, high-level language for implementing smart contracts." It was proposed back in 2014 by Gavin Wood and developed by several people, most of them core contributors to the Ethereum platform, to enable writing smart contracts on blockchain platforms such as Ethereum. Solidity was designed around the ECMAScript syntax to make something web developers would be familiar with, but it is statically typed like C++, with support for inheritance, libraries, and user-defined data types. At the time Solidity was proposed, it had significant differences from other languages also targeting the EVM (e.g., Serpent, LLL, Viper, and Mutan), such as mappings, structs, inheritance, and even a natural language specification, NatSpec. Like other programming languages targeting a Virtual Machine (VM), Solidity is compiled into bytecode using a compiler: solc.

Smart contracts can be seen as a computer protocol intended to complete some task according to the contract rules. In the cryptocurrency context, smart contracts enforce transactions' traceability and irreversibility, avoiding the need for a third-party regulator such as a bank. This concept was suggested by Nick Szabo back in 1994.

This article is an introduction to Solidity from a security standpoint, created by the Checkmarx Security Research Team. As more and more people and organizations look to blockchain as a promising technology and are willing to build on top of it, it is mandatory to apply software development best practices such as code review, testing, and auditing while creating smart contracts. These practices become even more critical because smart contract execution happens in public, with source code generally available. It is hard to ensure that software can't be used in a way that was not anticipated, so it is essential to be aware of the most common issues as well as the exploitability of the environment in which the smart contract runs. An exploit may not target the smart contract itself, but the compiler or the virtual machine (e.g., the EVM) instead. We cover that in the next sections, providing a Proof-of-Concept that demonstrates the discussed topics.

Preamble
In the context of Ethereum (abbreviated Eth), smart contracts are scripts that can handle money. These contracts are enforced and certified by miners (multiple computers) who are responsible for adding a transaction (execution of a smart contract or payment of cryptocurrency) to a public ledger (a block). Multiple blocks chained together are called the blockchain. Miners spend "Gas" to do their work (e.g., publish a smart contract, run a smart contract function, or transfer money between accounts). This "Gas" is paid using Eth.

Privacy
In Solidity, private may be far from what you might expect, especially if you're used to Object-Oriented Programming in languages like Java. Marking a variable private doesn't mean that someone can't read its content; it just means that it can be accessed only from within the contract. You should remember that the blockchain is stored on many computers, making it possible for others to see what's stored in such "private" variables. Note that private functions are not inherited by other contracts. To make functions inheritable by derived contracts while keeping them inaccessible from outside, Solidity offers the internal keyword.

pure/view functions
Preventing functions from reading the state at the level of the EVM is not possible, but it is possible to prevent them from writing to the state (i.e., view can be enforced at the EVM level, while pure cannot).
The compiler started enforcing that pure is not reading the state in version 0.4.17 (source).

Reentrancy
Reentrancy is a well-known computing concept, and also the cause of a $70M hack back in June 2016 called the DAO (Decentralized Autonomous Organization) Attack. David Siegel authored "Understanding The DAO Hack for Journalists", a complete events timeline and comprehensive explanation of what happened.

"In computing, a computer program or subroutine is called reentrant if it can be interrupted in the middle of its execution and then safely be called again ("re-entered") before its previous invocations complete execution." (Wikipedia)

By using a common computing pattern, it was possible to exploit a Smart Contract. It is still possible. The call() function is the heart of this attack, and it is worth noting that it:
- is used to invoke a function in the same contract (or of another contract) to transfer data or Ethereum;
- does not throw, it just returns true/false;
- triggers the execution of code and spends all the available Gas for this purpose; there's no Gas limit unless we specify one;
- "be prepared" - any function running external code is a threat;
- These functions: <address>.transfer(uint256 amount) / <address>.send(uint256 amount) return (bool) are safe against Reentrancy as they currently have a limit of 2300 Gas;
- if you cannot avoid using call(), update the internal state before making an external call.
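To make the pattern concrete, here is a minimal, illustrative pair of contracts (modern Solidity 0.8.x syntax is assumed); this is not the DAO code, and the contract and function names are invented for the example. The first version makes the external call before updating state and is therefore re-enterable; the second applies the checks-effects-interactions ordering recommended above.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// BAD: the external call runs before the balance is zeroed, so a malicious
// contract's receive() function can call withdraw() again and drain the vault.
contract VulnerableVault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0; // state updated too late
    }
}

// SAFER: checks-effects-interactions, i.e. update internal state first.
contract SaferVault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        balances[msg.sender] = 0;          // effect before interaction
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
    }
}
```

In practice, many projects also add a reentrancy guard modifier (for example, OpenZeppelin's ReentrancyGuard) on top of this ordering rather than relying on the ordering alone.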
<urn:uuid:239b07ec-0346-423a-95e4-124caabd1794>
CC-MAIN-2022-40
https://checkmarx.com/blog/checkmarx-research-solidity-and-smart-contracts-from-a-security-standpoint/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00299.warc.gz
en
0.929888
1,075
2.8125
3
Last time, I suggested that the obvious analogy between IT architecture and real building architecture was potentially flawed, because of the dramatic differences in their medium of expression, and that another analogy (with music) might be more appropriate in some ways. The real lesson, though, is that all such analogies have serious limitations. While these analogies are intuitively appealing, they ultimately fail as models when it comes to the most important property of a useful model: the ability to straightforwardly apply what one learns from the model to the thing being modeled. This is because these analogies imply too many irrelevant details that obscure the essentials, the things that really matter. How can we get at those essentials? Ideally, how can we develop a model for architecture that not only applies to “our kind” of architecture, but might also be applied to civil architecture and music, because it captures and expresses those things common to all three disciplines, the things that enable those appealing analogies? By answering the question I originally posed,—what is it that we want “our kind” of architecture to do for us?—that led to the consideration of analogy as an approach. This time, I’ll start by trying to answer that question, and then start wrestling with how we might get from there to a useful definition of IT architecture. The conventional wisdom about IT architecture has historically included statements like: - Architectural decisions are more abstract than design decisions; - Architectural decisions are global in scope, i.e., architecture is about a holistic perspective, or the big picture; and - Architectural decisions are hard to change. These three observations seem to flow into one another. Because architectural decisions are more abstract, they apply more broadly, i.e., are more global, and thus affect many more things, increasing the consequences of changing them. But these observations, even if they’re correct (and I’ll argue later that they’re not), don’t really help us understand what the benefits of thinking architecturally might be. And if architecture is just a more abstract form of design, where does architecture end and design begin? Why call it architecture rather than, for example: - Preliminary (as opposed to final) design, or - Abstract (as opposed to concrete) design, or - Logical (as opposed to physical) design, or - High level (as opposed to low level) design, or - General (as opposed to detailed) design? These terms are frequently used, so we must ask, if they’re not architecture, then how does architecture differ from them? Not much help here either understanding what architecture is good for or how it differs from design. Lately, other kinds of statements are increasingly heard: - Architecture is about multiple views addressing the concerns of different stakeholders. - Architecture is about adaptability. - Architecture (especially enterprise architecture) is about aligning business and technology, i.e., delivering business value from IT investments. The first of these recent characterizations is more about how one should represent an architecture, though it implies something important about what we hope to get out of architecture: reconciliation and integration of multiple stakeholder perspectives. The second reflects the current obsession with being responsive to change. Adaptability is only one of many “-ities”, sometimes called pervasive attributes, system qualities or nonfunctional requirements. 
These are properties of a system as a whole, which cannot be readily isolated in a single component. While adaptability is certainly important in many situations, it is probably unduly restrictive to make it the primary focus of architecture. The last, though, more directly addresses the question of the value of architecture, and it is inclusive of the other two perspectives on architecture. If this is what we expect of architecture, i.e., aligning technology with the needs of the business, what does architecture need to be to deliver on this promise? One answer, the premise of the agile programming movement, is that architecture, specifically when caricatured as “Big Design Up Front” (BDUF), can’t possibly deliver on this promise, and should just get out of the way. While there are a lot of very useful ideas in agile programming, there are too many of us whose first hand experience with architecture has shown that it works in real-world situations to “throw the baby out with the bath water”. Besides, if architecture is really distinct from design, it can’t be confused with “Big Design Up Front”. One of the most valuable insights of the agile programming movement is that you can’t build a system that delivers business value if you can’t define what business value you need to deliver. Agile programming traces this problem to two sources: lack of understanding of what is needed, and the inherently dynamic “moving-target” nature of what is needed.
<urn:uuid:d22ac37c-1c13-4ddc-a189-ea0a4c6d0969>
CC-MAIN-2022-40
https://cioupdate.com/the-architecture-of-architecture-part-iv-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00299.warc.gz
en
0.954949
1,022
2.703125
3
Children nowadays prefer internet gaming over outdoor plays. If you are raising a child in the digital age, you have to be aware of Internet Gaming Disorder. Keep your kid’s gaming addiction in check with parental control apps. We are more connected than ever, with many children having access to tablets and smartphones even before they learn to walk and talk. A younger child instantly gets attracted to smartphones, which opens up a world of wonders for them. And as they grow up, this fascination develops into dependence on the devices to keep them amused and occupied. When digital tools have changed the reality of children’s lives to this extent, it is our responsibility to help them separate fact from fiction. If your child spends long hours playing online games, you might be afraid of them being addicted to it. Continue reading to identify whether it is a disorder and if professional intervention is needed. What is Internet Gaming Disorder? According to World Health Organization (WHO), a gaming disorder is – a pattern of gaming behavior (including both digital and video gaming) characterized by unconstrained control over gaming, which is given priority over other interests and is continued despite facing negative consequences from the behavior. To be classified as ‘gaming disorder’ – The behavior must be severe enough to result in significant adverse effects in personal, family, social, educational, or occupational areas of functioning for more than twelve months. Parents can use the kid’s safety app to save their children from having a gaming disorder. But first, we need to understand – all children are different, and there are various reasons why some kids are drawn to gaming. Let us try to figure out what they are. Why is your child focused on online gaming? Gaming can be a very positive experience; the problem arises when it goes out of proportion, and they start neglecting other areas of their lives to play online games. Over time, a child may start to turn to the game as a way of coping with stress and difficult life issues, such as divorce-separation in the family, or pressure of the studies. Stepping into a virtual world can provide a feeling of relief from the stress of daily life. The brain’s reward center gets triggered by gaming, which releases dopamine (one of the ‘feel-good hormones’,) associated with feelings of euphoria, bliss, concentration, and motivation. While gaming, dopamine can surge as a child gets thrilled by reaching a new high score or taking down an opponent, resulting in a temporary feeling of bliss. Being enjoyable and engaging for kids, online gaming is a popular choice for filling downtime, but when the appeal goes beyond fun, it may be a cause for alarm. How to know if your child is addicted to online games or not? In addition to being a great medium for recreation, online games can also be used as a learning tool. For many kids, games are a way to connect with friends and to relax after a tedious day. But unless it is in moderation, the habit of online gaming can play havoc in your child’s life. 10 Signs your child is having Internet Gaming Disorder include: - Gaming gets into the way of your child’s ability to complete homework, getting ready for school on time, or focus on educational needs. - Gaming negatively impacts their relationships with you, siblings, other family members, and friends. - They talk about their game continuously. - They play for hours, stopping only when asked. 
- They experience uncontrollable outbursts or sometimes physical aggression when asked to stop gaming. - Gaming takes precedence over other important areas in their lives. - It starts taking its toll on healthy habits such as eating, hygiene, and exercise. - Gaming results in significant changes in mood. - They appear to be distracted, depressed, or lonely, as some games can be really isolating. - Physical symptoms might arise, such as dry or red eyes, soreness in the fingers, backache, neckache, or headaches. Studies suggest that gaming disorder affects only a small percentage of gamers who engage in digital gaming activities. But parents should always look for the signs mentioned above as problematic gaming can negatively affect teens and escalate over time. And whenever needed, they must use parental controls to build healthy digital habits in the children. What are the benefits of online gaming? - For kids struggling with self-confidence, success in gaming can raise feelings of self-esteem. - A well-played game can trigger positive emotions after a difficult day at school or struggling to connect with peers. - According to research, gamers show improvements in sustained attention and selective attention. - The brain regions involved in attention are more efficient in players and require less activation to sustain attention on critical tasks. - Evidence suggests that video games can increase the size and efficiency of the regions of the brain responsible for visuospatial skills. - Online games can be a fun way to connect with a peer for the kids who are introverts by nature and struggle to initiate conversations or to enter into the groups. It is essential to know your child’s specific interest in gaming to determine whether or not gaming provides a healthy outlet or could become problematic. As soon as you find your kid connecting to other kids only through gaming, you need to take the following steps to abate the addiction. How to treat your child’s Internet Gaming Disorder? - Help your child understand that success in the gaming world is virtual or imaginary, and has nothing to do with real-life success. It is more worthwhile to get good grades, earn real money, learning a real-life skill than living in fantasy worlds. - Set definite rules for a gaming time limit and make sure that you strictly enforce it with the help of parental control apps. - Create consequences for not following any of the rules, such as banning the games for a week if they exceed the assigned time limit. - Reward your child with game time or deduct the game time based on them fulfilling or failing a goal. - With parental controls, you can set a schedule for them, allowing to play only on certain days and at a specific time. - Prevent your child from playing open-ended exploration games and ask to choose games that they can be completed within a short duration. - Introduce your child to other fun things of varying interests that offers enjoyment, and could even earn him real-life points. How can Bit Guardian Parental Control can help? Bit Guardian Parental Control is a wonderful child monitoring app that offers many helpful features to curb your child’s internet addiction. It allows parents to block any app that their child might be addicted to, or parents can block Play Store, so kids cannot download any new app. It also enables parents to limit the screen-time and plan a schedule for bedtime and other apps. 
Involve your child in physical activities like playing sports, biking, or running to less physical activities such as reading, learning to play an instrument, or going out with friends. Be a smart parent, use parental control apps for Android, and cure your child of Internet Gaming Disorder.
<urn:uuid:c27f8616-4f76-4de9-b73a-408c0fb23637>
CC-MAIN-2022-40
https://blog.bit-guardian.com/how-does-internet-gaming-disorder-affect-children/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00299.warc.gz
en
0.942664
1,475
2.734375
3
SAN FRANCISCO (AP) — IBM pitted a computer against two human debaters in the first public demonstration of artificial intelligence technology it's been working on for more than five years. The company unveiled its Project Debater in San Francisco on Monday, asking it to make a case for government-subsidized space research — a topic it hadn't studied in advance but championed fiercely with just a few awkward gaps in reasoning. "Subsidizing space exploration is like investing in really good tires," argued the computer system, its female voice embodied in a 5-foot-tall machine shaped like a monolith with TV screens on its sides. Such research would enrich the human mind, inspire young people and be a "very sound investment," it said, making it more important even than good roads, schools or health care. The computer delivered its opening argument by pulling in evidence from its huge internal repository of newspapers, journals and other sources. It then listened to a professional human debater's counter-argument and spent four minutes rebutting it. After closing arguments it moved on to a second debate about telemedicine. An IBM research team based in Israel began working on the project not long after IBM's Watson computer beat two human quizmasters on a "Jeopardy" challenge in 2011. But rather than just scanning a giant trove of data in search of factoids, IBM's latest project taps into several more complex branches of AI. Search engine algorithms used by Google and Microsoft's Bing use similar technology to digest and summarize written content and compose new paragraphs. Voice assistants such as Amazon's Alexa rely on listening comprehension to answer questions posed by people. Google recently demonstrated an eerily human-like voice assistant that can call hair salons or restaurants to make appointments. But IBM says it's breaking new ground by creating a system that tackles deeper human practices of rhetoric and analysis, and how they're used to discuss big questions whose answers aren't always clear. "If you think of the rules of debate, they're far more open-ended than the rules of a board game," said Ranit Aharonov, who manages the debater project. As expected, the machine tends to be better than humans at bringing in numbers and other detailed supporting evidence. It's also able to latch onto the most salient and attention-getting elements of an argument, and can even deliver some self-referential jokes about being a computer. But it lacks tact, researchers said. Sometimes the jokes don't come out right. And on Monday, some of the sources it cited — such as a German official and an Arab sheikh — didn't seem particularly germane. "Humans tend to be better at using more expressive language, more original language," said Dario Gil, IBM's vice president of AI research. "They bring in their own personal experience as a way to illustrate the point. The machine doesn't live in the real world or have a life that it's able to tap into." There are no immediate plans to turn Project Debater into a commercial product, but Gil said it could be useful in the future in helping lawyers or other human workers make informed decisions.
<urn:uuid:87bb19b2-a362-4599-829b-a10cbada168e>
CC-MAIN-2022-40
https://www.mbtmag.com/home/news/21102026/ibm-pits-human-debaters-against-computers
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00299.warc.gz
en
0.961268
650
2.640625
3
Computer networks can be divided into various types depending upon their size and usability. The size of a network can be assessed by its geographical distribution: it can be as small as a room with a few devices or computers, or as widespread as the entire world, with millions of interconnected devices. Several important types of computer networks exist; in this article, we will try to understand PAN in detail.

A Personal Area Network (PAN) is a local network designed to transmit data between personal computing devices such as PCs, personal digital assistants (PDAs), and telephones. A personal area network handles the interconnection of IT devices in the immediate surroundings of a single user. Generally, a PAN contains appliances such as cordless microphones and keyboards, cordless phones, and Bluetooth devices. Gaming devices, like a game console system, may also be set up on a PAN.
<urn:uuid:9463555c-15ca-475f-b36b-b31cb6f2f7ed>
CC-MAIN-2022-40
https://networkinterview.com/pan/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00299.warc.gz
en
0.939147
190
3.71875
4
"All people seem to need data processing." Although true, that sentence isn't necessarily a statement of fact. Rather, it's a simple mnemonic device created to help people remember the seven layers of the Open Systems Interconnection (OSI) model—application, presentation, session, transport, network, data link, and physical. But what is the OSI model, and why is understanding it important to understanding how complex computer networks operate? Short answer: the OSI model allows us to talk to each other about what's happening where in a network. In sticking with the structure of the OSI model, we'll start with the basics and then provide more in-depth explanations of each layer, ending with a closer look at one of the most important yet undervalued layers of all. The OSI Model Explained First conceived of in the 1970s and formalized in 1984, the OSI model isn't a set of hard and fast rules, but it does provide a big-picture view of how networks operate, from physical hardware to end-user applications. It's also valuable when things go wrong, allowing network operations professionals to pinpoint specific layers to troubleshoot. If someone says, "Well, that's a Layer 7 problem," what they're really saying is that there could be an issue with an application like a web browser. The OSI model has two major components: the basic reference model and protocols. The basic reference model is just another way to describe the 7-layer model. In this model, a layer in your network works with the layers immediately above and below it, meaning tools in Layer 4 work directly with tools in Layers 3 and 5. Protocols allow each layer on a host to communicate with the corresponding layer on a different host. Protocols are one reason why you can send an email from a Layer 7 application like Outlook from your desk in Seattle to someone who uses Gmail in Singapore. Now that we've gone over a quick sketch of what the OSI model is, let's start peeling this seven-layer onion. OSI Model Layers Although the OSI model has a top-down construction, we're going to start at the bottom — Layer 1 — and work our way up. |1||Physical||If you've ever had to troubleshoot anything electronic, Layer 1 is where you'd answer the question, "Is it plugged in?" Layer 1 also includes layouts of pins, voltages, radio frequency links, and other physical requirements. It's a media layer used to transmit and receive symbols, or raw bits of data, which it converts into electrical, radio, or optical signals.| |2||Data Link||This digital stratum is all about media, acting as an avenue for node-to-node data transfers of frames—simple containers for single network packets—between two physically connected devices. It's where you'll find most of the switches used to start or end communication between connected devices. Layer 2 is comprised of two sublayers: MAC, or Media Access Control, and LLC, or Logical Link Control. MAC determines how devices in a network gain access to a medium and permission to transmit data. LLC identifies and encapsulates network layer protocols and controls error checking and frame synchronization.| |3||Network||Another media layer, Layer 3 is home to IP addresses and routers that look for the most efficient communication pathways for packets containing control information and user data, also known as a payload. If a packet is too large to be transmitted, it can be split into several fragments which are shipped out and then reassembled on the receiving end. 
Layer 3 also contains network firewalls and 3-layer switches.|
|4||Transport||Layer 4 is a host layer that generally functions as a digital post office coordinating data transfers between systems and hosts, including how much data to send, the rate of data transmission, data destinations, and more. Although they're not included in the OSI model, Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are usually categorized as Layer 4 protocols. Layer 4 is also where you'll find gateways and additional firewalls.|
|5||Session||Layer 5 is a host layer that acts like a moderator in that it controls the dialogue between computers, devices, or servers. It sets up pathways, limits for response wait time, and terminates sessions.|
|6||Presentation||This host layer is where data is translated and formatted so applications, networks, and devices can understand what they're receiving. Characters are encoded and data compressed, encrypted, and decrypted on Layer 6.|
|7||Application||This top-of-stack host layer is familiar to end users because it's home to Application Programming Interfaces (API) that allow resource sharing, remote file access, and more. It's where you'll find web browsers and apps like email clients and social media sites.|

Because Layer 7 is complicated and omnipresent, let's take a closer look.

Layer 7 of the OSI Model
In a lot of ways, this is where the enterprise lives. Layer 7 is the point at which customers will directly engage with your business. The application layer identifies communication components, determines resource availability, and ensures that communication runs smoothly. This layer is what allows access to network resources, so you'll likely recognize its most common protocols:
- Hypertext Transfer Protocol (HTTP)
- File Transfer Protocol (FTP)
- Simple Mail Transfer Protocol (SMTP)

Interestingly, most network traffic monitoring solutions don't actually dive into Layer 7, instead sticking to Layers 3 and 4 for their analysis. The problem with this approach is that you then lose out on a ton of unique behavioral data that can help with everything from load balancing to cyber threat mitigation. Because Layer 7 interacts with both the end user (whether that's a programmer or a customer) and the application, analyzing the traffic on this layer provides a level of granularity that other layers lack. Think of it like eavesdropping: with L2-L4 visibility, you can tell two people are talking to each other from either side of a building. With L7 visibility, you know who each person is, which rooms they're standing in, and what they're actually saying to one another.

More organizations and vendors are beginning to realize the value of Layer 7 visibility for performance analytics, and even more so for enterprise security. For example, analyzing L7 traffic in real time gives security teams the ability to detect suspicious behavior like malicious DDoS traffic and mitigate the threat without impacting legitimate visitors. Full-fledged network traffic analysis takes this process a step further by adding behavioral analytics for threat detection and response, which you can learn about in this blog.
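As a small end-to-end illustration of the layering described above, the Python sketch below sends one HTTP request over a TCP socket: the application-layer (L7) message rides on a transport-layer (L4) connection, while the operating system and network handle Layers 1 through 3. The mapping in the comments is deliberately loose, the host is just a public test site, and the script needs outbound network access to run.

```python
import socket

host = "example.com"  # a harmless public test host

# L4: ask the OS for a TCP connection; L3 addressing/routing and L1/L2
# delivery happen below the socket API and are invisible here.
with socket.create_connection((host, 80), timeout=5) as sock:
    # L7: an application-layer message (HTTP), encoded to bytes (loosely, L6).
    request = (
        f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    ).encode("ascii")
    sock.sendall(request)

    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk

# Print just the L7 status line, e.g. "HTTP/1.1 200 OK".
print(reply.split(b"\r\n", 1)[0].decode())
```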
<urn:uuid:5f66de5d-bf34-4c23-b508-0f0b963a9968>
CC-MAIN-2022-40
https://www.extrahop.com/company/blog/2019/the-osi-model-explained/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00299.warc.gz
en
0.923919
1,404
3.6875
4
Elliptic Curve Cryptography (ECC) is lighter, faster and more secure than RSA

The SSL/TLS certificate that uses elliptic curve cryptography (ECC) in place of the RSA algorithm to encrypt the data transferred between the user and the website is called the ECC SSL certificate. ECC SSL certificates are much more secure than the widely used RSA SSL certificates. For a while now the RSA public key cryptosystem has been the standard in the SSL/TLS industry. But RSA's days are numbered. Everyone should be moving towards Elliptic Curve Cryptography for SSL/TLS and most other PKI functions, too. That's because ECC offers myriad benefits:
- It's more secure
- It requires less CPU resources
- It scales better
- It offers better privacy

Granted, not all CAs offer ECC-capable SSL/TLS certificates, and there are still a handful of popular servers and platforms that have yet to add support. But the industry is moving away from RSA, and ECC will soon become the new standard.

ECC is more Secure
Elliptic Curve Cryptography is based on mapping points on an elliptic curve. It sounds complex, but it's really not once you see it in action. Several points on a given elliptic curve are chosen and plotted, each time reflecting the third point across the X axis, and it continues until sufficient entropy has been achieved. Because of the nature of the cryptography, ECC has a distinct advantage over its RSA counterpart: it's not as vulnerable to quantum computing. Owing to the massive processing power of quantum computers – which will be viable in the next decade – the method behind RSA, prime factorization, will be rendered fairly ineffective early on. ECC will not be invulnerable to quantum computing, but it will have more hardness, and there are elliptic curve-based post-quantum cryptosystems in development as we speak.

ECC requires less CPU resources
There's a reason that many enterprises decide to load balance and shift SSL/TLS functions to an edge device to free up resources on their application servers: RSA is expensive. The keys are massive, almost unwieldy. The current standard is 2048-bit, though some go as high as 4096. That taxes a server, especially at scale where you're performing thousands of handshakes at a time and then encrypting and decrypting data from each connection. ECC doesn't have this problem; its keys are substantially smaller. That, in turn, is less taxing on servers. An ECC key equivalent to the standard 2048-bit RSA one would be 224-bit, a little more than 1/10th the size of its RSA counterpart. That means less overhead during handshakes, which means a faster website and a better user experience.

ECC Scales Better
We've already discussed how resource-hungry RSA is as a cryptosystem, and that's not going to get any better as encryption standards become more stringent and more hardness is demanded from private keys. A big part of the problem is that as RSA key sizes increase, the improvement in security is not commensurate with the growth of the key. A 4096-bit key doesn't provide double the security of a 2048-bit one. As keys grow increasingly larger, the gains made in security continue to decline. And the server will continue to be taxed more and more.

DigiCert SSL Certificates and Save Up to 29%! Get DigiCert SSL/TLS Certificates from a globally trusted CheapSSLsecurity.com. DigiCert offers unlimited server licenses, unlimited reissuances, 256-bit encryption, dynamic trust seal, and more.
ECC doesn't share this problem: its keys are smaller, and as they grow bigger the security increases, too. As stronger encryption becomes mandated, using RSA is going to become prohibitive, which is why making the switch now will put you ahead in the long run.

ECC offers perfect forward secrecy
RSA key exchange should not be used anymore. It faces known vulnerabilities and, again, it's resource hungry. It also doesn't allow for Perfect Forward Secrecy – which should be considered best practice at this point. So, what is Perfect Forward Secrecy? It adds another layer of privacy by protecting the integrity of session keys, even in the event the private key was compromised. When a handshake occurs at the outset of an HTTPS connection, the public key encrypts a shared secret that is decrypted by the private key and used to generate the symmetric session keys that will facilitate communication. You don't actually use the public/private key pair to communicate, those are just for authentication and exchanging session keys. With RSA, unless it's really jimmy-rigged, you can decrypt the session keys if you crack the private key. RSA safeguards against that by using large private keys, but that ends up being a double-edged sword because, as we just discussed, RSA uses a lot of resources. That more or less prohibits the use of ephemeral keys – session keys that are regularly switched out. All of the decryption associated with those key exchanges only uses up more resources. ECC on the other hand can handle the use of ephemeral keys and provides perfect forward secrecy, which protects individual connections even if the private key gets cracked.

Should I switch to ECC?
Before you make a change to an Elliptic Curve-based cryptosystem you'll need to check on a few things. All of the major browsers support ECC – provided they're up to date – so you wouldn't have to worry about your website breaking for users. Most servers support ECC, too. The problem is certain control panels have yet to add support (shame on them). If you have direct access to your servers then switching should be no problem at all. If not, contact your hosting provider and check whether ECC is an option – and if not, when it will be.

How do I get an ECC SSL Certificate?
Simple. Just follow the same procedures you would normally follow when ordering an SSL certificate from CheapSSLsecurity.com. If the SSL certificate supports it, simply select the option and then provide the CA with an ECC Certificate Signing Request. This can be generated on any server with ECC support. Granted, not every certificate offers ECC, so you'll need to choose a product with full ECC support. We recommend:

|ECC SSL Certificates|Years|Price/Year|
|---|---|---|
|DigiCert Secure Site Pro EV SSL||$1499.00 $1088.00|
|DigiCert Secure Site Pro||$995.00 $691.67|
|Symantec Secure Site Pro||$995.00 $543.56|
|Symantec Secure Site Pro with EV||$1499.00 $779.19|

Elliptic Curve Cryptography is set to become the backbone of SSL/TLS over the next several years. It's only hamstrung by slow adoption on the part of platforms and servers. But that will change soon, and you don't want to be left behind when it does.

Purchase a Wildcard SSL Certificate & Save Up to 73%! We offer the best discount on all types of wildcard SSL Certificates with DV and OV validation. We offer wildcard certificates from the leading CAs, including Comodo CA, Sectigo, Thawte, GeoTrust, and RapidSSL starting for as little as $52.95 per year.
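If you would rather script the key and CSR step described above than use your server's tooling, the sketch below uses the Python cryptography package (a recent version is assumed, so the backend argument is omitted). The curve, subject name, and output handling are example choices, not a recommendation for any particular CA's requirements; in real use, write the private key to a protected file instead of printing it.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Generate an ECC key pair on the P-256 curve (roughly comparable in strength
# to a 3072-bit RSA key).
key = ec.generate_private_key(ec.SECP256R1())

# Build and self-sign a certificate signing request for an example hostname.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
    ]))
    .sign(key, hashes.SHA256())
)

# PEM output: the key stays with you, the CSR goes to the CA.
print(key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
).decode())
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```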
<urn:uuid:0c81b548-e83b-4881-bb22-13db55b87c18>
CC-MAIN-2022-40
https://cheapsslsecurity.com/p/what-is-ecc-ssl/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00499.warc.gz
en
0.922635
1,635
2.671875
3
Researchers use Instagram mask selfies to improve biometric facial recognition algorithms Some facial biometrics developers fear the COVID-19 pandemic will affect their business, as the number of people wearing masks has increased. Because they cover essential facial features, masks make it harder for algorithms to recognize users, so researchers need data to improve them. Researchers are collecting social media photos of people wearing masks, without their consent, to feed them into biometric facial recognition algorithms to improve detection and accuracy, writes Cnet after identifying thousands of photos, mostly taken from Instagram, available in public data sets. The COVID19 Mask Image Dataset published on Github in April had over 1,200 pictures taken from Instagram. It used AI startup Workaround to filter the images. “We were inspired by all the companies that were launching free tools and everything they can do to help,” Workaround CEO Wafaa Arbash told Cnet. “We have these public images from Instagram, so these are not private images. We were just searching and getting the right data.” A number of facial recognition companies have asked employees to send selfies wearing masks or digitally add masks on existing photos, which is how the U.S. National Institute of Standards and Technology (NIST) plans to test the technology. NIST has announced a series of tests for face mask effect on facial recognition accuracy. The first step will be to digitally add synthetic masks to faces and test 1:1 verification algorithms already submitted. New algorithms can also be suggested for testing. “We will first mask only the probe image, leaving the reference photo as is. Later, we will consider the effect of masking both images. We will quantify the effect of masks on both false negative and false positives match rates and issue a public report,” the organization wrote. For their research, Arbash explained the Instagram photos were found using mask-related hashtags. They had initially collected 3,000 pictures, but reduced the list to 1,200. Cnet writes one of the pictures included a child, yet Arbash said it may have been an error. The people in the photos were never asked for consent, as their profiles were public and not set to private, Arbash said. “We’re not making any money off of this, it’s not commercial,” Arbash said. “The goal and the intention was to help any data science or machine learning engineers who are working to fix this issue and help with public safety.” Researchers at Wuhan University in China have allegedly created the Real World Masked Face Dataset which contains over 5,000 photos of masked faces “from massive internet resources.” Face mask detection technology is a priority for developers, yet privacy advocates have voiced concerns regarding the methods used to compile the databases.
<urn:uuid:98366220-44f5-462a-8035-3a30bb25ed58>
CC-MAIN-2022-40
https://www.biometricupdate.com/202005/researchers-use-instagram-mask-selfies-to-improve-biometric-facial-recognition-algorithms
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00499.warc.gz
en
0.954926
587
2.546875
3
Black History is American History

Too often, the stories and histories of Black, Indigenous, and People of Color are not given the prominence they so richly deserve. The struggle, the brilliance, and the cultural wealth of these communities have greatly enhanced the American experience. Many of us learned a United States history devoid of a true illustration of Black excellence, resistance and joy. With the focus on Black history this month, we have an opportunity to expand our understanding of the invaluable role Black Americans have played in the creation and sustainment of our nation. As we commit ourselves to having an inclusive culture where everyone is valued, let us embrace Black History Month in this spirit. As Carter G. Woodson, the founder of Black History Month, believed, appreciating a people's history is a prerequisite to equality. It is our responsibility as leaders and as citizens to honor the stories and celebrate the great contributions of Black people to America's history. We do this because celebrating Black History is a celebration and recognition of the struggle for Black equality in America. This is a celebration and a struggle that we can all amplify this month and all year long.
<urn:uuid:cea89305-4a71-4526-90d4-d356ef7eb94f>
CC-MAIN-2022-40
https://www.aryaka.com/blog/black-history-is-american-history/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00499.warc.gz
en
0.933992
230
3.65625
4
Why Does My Computer Blue Screen? Most people who use PCs have heard of the “Blue Screen of Death”, but the blue screen isn’t the terrifying problem that it once was. Getting a blue screen doesn’t mean that your computer is toast. Still, the blue screen is often a sign that there is a deeper issue with your PC that needs to be repaired. Here is what you need to know about the blue screen, why it happens, how to troubleshoot the problem, and what can be done to prevent it from happening. What the Blue Screen Really Means The blue screen happens when Windows encounters a critical error that stops the operating system from running. These critical errors can be the result of faulty hardware, faulty or low level hardware drivers, or faulty or low level apps that run within the Windows kernel. Years ago, getting persistent Blue Screen of Death errors meant there wasn’t much to be done except re-install Windows from scratch and hope you didn’t have hardware issues. This isn’t the case with today’s versions of Windows. With Windows 10, the blue screen usually occurs just before the computer restarts itself. If it doesn’t restart on its own, a reboot is your first step to fixing the problem. But why does the computer get a blue screen in the first place? When your screen goes blue, Windows is trying to stop its processes and restart the system, while also gathering data about the critical failure so that this information can be relayed to Microsoft for future troubleshooting and support features. Once the computer reboots, it is usually once again functioning, and it may work as though nothing ever happened. The important thing to remember when you get the blue screen is that you need to determine why the blue screen happened so that you can prevent it from happening again. If you continue to run the same drivers, software, and hardware without making adjustments, the problem is unlikely to go away on its own. Troubleshooting the Blue Screen of Death Troubleshooting the blue screen is easier today than it has ever been before. When Windows encounters a critical failure, it automatically gathers data about the failure and restarts the computer. Depending on the version of Windows that you have, the blue screen may give you detailed information about what caused the error. If you have Windows 10, the screen may display an error name or description. If the blue screen goes by too fast for you to write down the information, you can still access these error logs in the Action Center found in the Control Panel. In Windows 7 the Action Center is under System and Security, while in Windows 8 and 10 it is under Security and Maintenance. If you follow the troubleshooting steps in the Action Center but you still keep getting the blue screen, there are some additional troubleshooting steps that you can take to discover the problem. - If you have recently installed new hardware and hardware drivers that may be causing the problem, disconnect the hardware to see if that resolves the issue. If not, try uninstalling or updating the drivers. - If you have recently installed software that could have caused the error, try uninstalling the software and see if that resolves the issue. - Do a scan on your computer for malware and viruses, as these can often get into the Windows kernel and cause the blue screen. - If the blue screen happens during an update, revert back to your previous version of Windows using the settings in the control panel or by using system restore. 
- If you are not able to troubleshoot because the computer keeps going to a blue screen, try booting up in safe mode to troubleshoot the problem. - When all else fails, do a system restore to a date when you know the computer was working properly. From there you can determine what made the change that caused the problem. Help with Troubleshooting and Repair If you are not familiar with the Windows Control Panel, you may need to get assistance in troubleshooting your blue screen. You should also seek help if none of these troubleshooting techniques fix the PC. You should also reach out to us at Bristeeri Technologies for professional PC repair if you are able to access the error information but are unable to find what the error description means. If you are not comfortable running troubleshooting steps on your own, we can help you troubleshoot the problem and repair it so that it doesn’t happen again. Sometimes even the basic troubleshooting that you can do through the Action Center is not enough to stop the problem from happening again. You may want to have us check out your PC after a blue screen error to ensure that the issue has been resolved. We are familiar with the error codes that are stored in the minidump file, and we can use those codes to determine the exact cause of the blue screen. From there, we can quickly repair the problem so that you don’t lose your work in the future. Two Cautionary Notes for DIY PC Repairs We absolutely encourage you to learn more about your PC, but keep these two things in mind if you’re taking on this project yourself: - You do have the ability to do an internet search for the error code that you were given when you got the blue screen, if you know how to access the error code. However, not all error codes are available online. Following the wrong troubleshooting steps for your issue could cause additional problems with your system. If you have any questions about what your error code means, you can contact us for assistance. - Additionally, there are some online tools and software applications that will come up when you do a search for troubleshooting for the blue screen. Not all of these tools and applications are trustworthy. If you aren’t absolutely certain about the source of the tool or application, you should not use it. Preventing the Blue Screen of Death In addition to finding the root cause of the blue screen and repairing it, there are some other things that you can do to ensure that you don’t get the Blue Screen of Death in the future. Some tips for avoiding critical errors include: - Always use antivirus and antimalware software. - Only download trusted software from trusted companies and websites. - Always install the most updated version of drivers for hardware. - Promptly run Windows updates. If you follow these tips, you will be much less likely to have critical errors that cause the Blue Screen of Death. The Bottom Line Twenty years ago, the Blue Screen of Death could be very scary indeed. But with advanced technologies and the newest versions of Windows, blue screens do not happen as frequently as they once did. Blue screens also do not mean that you have to get a new computer or spend a lot of money on repairs. We can easily troubleshoot and repair your PC of blue screen stop errors. Microsoft has made it very easy for the average user to troubleshoot and repair some blue screen causes. However, you should never do something to your computer’s operating system that you are not completely sure is the right step to take. 
In trying to repair the error so that the blue screen doesn’t happen again, you could cause the computer to have even more serious issues. Whenever you are in doubt about the performance of your PC, contact us right away to get assistance with your PC repair. We are experienced and knowledgeable in these and other PC repair issues.
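As a small practical companion to the minidump discussion above, here is a hedged Python sketch that simply lists any crash-dump files Windows has written, so you know what exists before handing the machine to a technician. The folder path is the common default and may differ on your system, and reading the dump contents themselves still requires dedicated debugging tools.

```python
from datetime import datetime
from pathlib import Path

# Common default location for Windows crash dumps; this path can vary by configuration.
dump_dir = Path(r"C:\Windows\Minidump")

if dump_dir.exists():
    for dump in sorted(dump_dir.glob("*.dmp")):
        written = datetime.fromtimestamp(dump.stat().st_mtime)
        size_kb = dump.stat().st_size // 1024
        print(f"{dump.name}  written {written:%Y-%m-%d %H:%M}  ({size_kb} KB)")
else:
    print("No minidump folder found - dumps may be disabled or stored elsewhere.")
```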
<urn:uuid:1f849ec5-096d-47d0-9c1a-b563cd050f4e>
CC-MAIN-2022-40
https://bristeeritech.com/it-security-blog/why-does-my-computer-blue-screen/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00499.warc.gz
en
0.928089
1,543
2.75
3
Intelligent transportation systems (ITS) are quickly becoming a vastly connected network of smart infrastructure that provides the circulatory system for many up-and-coming smart cities. Two of the most promising technologies that will be looked upon to fully bring about the smart city vision are edge computing and 5G wireless networks. In part one of this three-part series on intelligent transportation systems, we looked at the effects of the Internet of Things (IoT) revolution and how the introduction of increasingly connected devices and technologies was enhancing the abilities and operational performance of ITS systems and their component technologies in several countries around the world. In this article, we'll be looking at two particular technologies, edge computing and 5G networks, and detailing three ways in which they are helping to radically change the way we travel today and how they may do so in the future. To start us off, let's begin by taking a look at how edge computing and 5G networks could affect smart cities and infrastructure.

Smart Cities and infrastructure

Smart cities and intelligent infrastructure are just a few of the ways in which data-driven technologies have become steadily more ingrained into our way of life. While all intelligent transportation systems rely on data to function to their optimal ability, not all of the towns and cities where they are found can be described as smart cities. As intelligent and connected devices continue to become more prominent within our towns and cities, these additions will add new sources of information to collect from as well as more potential destinations to pass data on to. With the amount of data being generated only growing, ITS systems will need ways and means of dealing with this traffic efficiently. The use of edge computing is one proposed solution to this problem, as it would allow much of the data being collected to be processed much closer to its source and, in some cases, even within the edge device itself. This would mean less traffic needs to be sent to the cloud, reducing overall network load. When coupled with 5G wireless communications networks, these technologies could significantly reduce data-sharing overhead and latency by introducing much more capable communication networks into the equation. Smart city infrastructure will likely come to rely heavily upon 5G networks to connect its various components, so don't be surprised as you start to see more and more 5G technologies begin to hit the market.

C-ITS, or cooperative intelligent transportation systems, are ITS systems that use wireless communications technologies to allow vehicles to communicate with each other, traffic signaling systems, and other roadside infrastructure in order to increase the amount of high-quality traffic and transport information and data available to drivers and other road users. In order for more advanced concepts within automation, artificial intelligence, and machine learning to become feasible in the future, the component technologies making up cooperative intelligent transportation systems will need ways and means to communicate amongst themselves in order to provide up-to-date data and information about traffic and transport situations. Using short-range communications, vehicles are able to communicate data about their positioning, speed, and direction to other connected vehicles and infrastructure, generally at a rate of 10 times per second.
With 5G wireless communications systems looming on the horizon, it's easy to see why C-ITS will play a large part in the adoption of 5G wireless networks across intelligent transportation systems. Automation will undoubtedly play a huge role in the development and emergence of enhanced and improved intelligent transportation systems. As the Fourth Industrial Revolution drives us into an age of vast swathes of actionable transportation data, automation technologies will be the key to unlocking the value of all this data. Automated vehicular prioritization systems, traffic management, and even autonomous vehicles and driverless cars will come to depend on a sea of vehicular data in order to operate both safely and efficiently. Both edge computing and 5G wireless networks will be required to make this feasible, and the progress of the elements covered earlier in this article will likely begin to provide clues as to how and when we may begin to see the first iterations of driverless vehicles hit the market. In the final part of this three-part series, we'll be looking at autonomous vehicles, their benefits, and some of the challenges holding them back, as well as detailing how these challenges are being met.
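To make the edge computing idea described above a little more concrete, here is a deliberately simplified Python sketch (not any vendor's actual edge stack) in which a roadside edge node condenses the ten-per-second vehicle readings mentioned earlier into a single summary record, so that only the summary travels over the backhaul to the cloud.

```python
from statistics import mean

# Hypothetical one-second batch of roadside readings (vehicle speeds in km/h),
# arriving at an edge node roughly ten times per second as described above.
readings = [52.1, 49.8, 51.3, 50.6, 48.9, 53.4, 50.2, 49.5, 51.8, 50.7]

def summarize_at_edge(samples):
    """Reduce raw samples to a compact summary so only this record is sent upstream."""
    return {
        "count": len(samples),
        "avg_speed_kmh": round(mean(samples), 1),
        "max_speed_kmh": max(samples),
        "min_speed_kmh": min(samples),
    }

summary = summarize_at_edge(readings)
print(summary)  # one small record forwarded to the cloud instead of ten raw messages
```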
<urn:uuid:a44f6a4b-d7d4-4e87-ab94-4c59f67d8088>
CC-MAIN-2022-40
https://www.lanner-america.com/blog/intelligent-transportation-systems-part-2-edge-computing-5g-change-way-travel/?noamp=mobile
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00499.warc.gz
en
0.943214
865
3.21875
3
A static IP address is an IP address that doesn’t change, while a dynamic IP address will only be active for a certain length of time before expiration. In this article we’ll look at when each type of IP address is used, the differences between the two, and what you need to know as an IT professional or MSP when using these terms and technology. Strap yourself in – we’re about to drop a whole load of knowledge! Quick refresher: What’s an IP, again? IP stands for Internet Protocol, and your IP address is the number that’s given to every device on your network, so that you can tell them apart easily, and so that they can communicate with one another. We use words to look up specific websites for example, but behind the scenes, these words are looked up by the Domain Name System, and then “translated” into the numbers – the IP address of a website. Our devices use an IP address to connect to the internet, but each device might use different IP addresses, depending on where you’re working or browsing from. For example, in your home office, your Internet Service Provider will probably assign an IP address, but if you grab your laptop and head to Starbucks, Starbucks will assign a temporary IP address to your device when you log onto the WiFi. When you get home, it will switch back again. What’s the difference between a static IP address and a dynamic IP address? While a static IP address will remain connected to that device for as long as you maintain the service, a dynamic IP will change when it expires, which is usually every 24 hours, or a multiple of 24 hours. Your static IP address will only change if you make a radical change to your network architecture, or if you’re no longer using that device at all. Static IP addresses are usually assigned by your Internet Service Provider, and are great for situations where you want your IP to remain the same. Examples include: - Remote Access: Your technicians will want to be able to remotely access your device from anywhere, and they do this by using your IP address. If your IP address changes regularly, then you’ll need to set up an auto-update or use an additional technology or integration to ensure you can easily allow remote support where necessary. - Geolocation Services: If you want a website to be able to recognize a device even while the user moves locations, a static IP address can be very useful. Think about dating apps for example where the website relies on recognizing the user and using their location to find nearby matches. A static IP gives more accurate data when location matters. - Server Hosting: Static IPs are most commonly used when hosting web servers, email servers, or any other kind of servers. This makes it easier for users to find the IP address in the Domain Name System. In fact, easier setup and DNS support is another bonus of using a static IP for any business. With all these benefits, you might wonder why anyone would want a dynamic IP. Well, let’s put you out of your misery. The two big reasons for opting for a dynamic IP, are cost, and security. While static IPs are often used for businesses and servers, individual users and consumer equipment usually relies on dynamic IP addresses. These are assigned by the ISP as well, this time by the Dynamic Host Configuration Protocol (DHCP) servers. How are dynamic IPs more secure and cost-effective? With a static IP address, individuals can look up the location of your devices, unless you’re using a VPN. (Hint: use a VPN.) 
Static IP addresses are definitely easier to hack than dynamic IP addresses, which change randomly and periodically, stopping attackers from tracking devices or targeting them with ease. Dynamic IP addresses are also more affordable. The ISP buys a batch of IP addresses and assigns them randomly, and in most cases the user will have no idea that the IP has changed. They also require no maintenance or ongoing costs. In contrast, ISPs know that if you’re requesting a static IP, it’s important for your business to have one, and so they feel confident that they can charge your business a premium for the privilege. The truth is, most users won’t need a static IP address, but it’s important to understand the difference for the rare cases that you or a client does need that control. How do I know if I have a dynamic or a static IP address? Interested in checking whether you have a dynamic IP address or a static one? Luckily, it’s an easy thing to work out, both on Windows and on Mac devices. On Windows 10, head to the Start bar, and type “Command Prompt” before clicking enter. Once you’ve clicked on Command Prompt, type the command “ipconfig/all” and press enter again. Here, you can search the network information for the words “DHCP Enabled.” There will either be a Yes or a No next to these words. If it says Yes, you are using a dynamic IP address, and if it says No, then the device you’re browsing from has a static IP address. If you’re using Mac OS, you’ll need to head to the System Preferences icon under the Apple menu. Click on Network, and then Advanced. Here there is an option specifically called TCP/IP. Under this item it will say either Manually, or Using DHCP. Similar to Windows above, if it says using DHCP, then your IP address is dynamic, not static.
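To complement the manual steps above, here is a small hedged Python sketch that automates the Windows check by running ipconfig /all and looking for the same "DHCP Enabled" lines. It is illustrative only and simply mirrors the procedure described, so on a Mac or Linux machine it reports that it cannot tell.

```python
import platform
import subprocess

def uses_dhcp_on_windows():
    """Return True/False based on 'DHCP Enabled' lines, or None if undetermined."""
    if platform.system() != "Windows":
        return None  # this sketch only mirrors the Windows steps described above
    output = subprocess.run(
        ["ipconfig", "/all"], capture_output=True, text=True, check=True
    ).stdout
    flags = [line for line in output.splitlines() if "DHCP Enabled" in line]
    if not flags:
        return None
    return any("Yes" in line for line in flags)

result = uses_dhcp_on_windows()
if result is True:
    print("At least one adapter has a dynamic (DHCP-assigned) IP address.")
elif result is False:
    print("All adapters report a static IP configuration.")
else:
    print("Could not determine (non-Windows system or no DHCP information found).")
```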
<urn:uuid:9346dcef-e762-4148-959d-6f38b16a02bf>
CC-MAIN-2022-40
https://www.atera.com/blog/do-i-have-a-static-ip-address-or-a-dynamic-ip-address/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00499.warc.gz
en
0.924634
1,172
2.703125
3
Drones have made enormous progress in recent years. Computer vision technology has been steadily improving; obstacles can be detected and avoided; and autonomous flight systems have lowered the barrier to entry for anyone with a few hundred dollars to spend. All of this has allowed aerial drones and their payloads – ranging from optical cameras to lidar sensors and thermal imaging systems – to begin providing solutions in challenging environments and at scale. But despite the many advances in electric drone technology, the laws of physics remain a problem when it comes to keeping platforms airborne. The heavier a drone is, the more energy it needs to stay aloft, which itself creates a bigger payload: the battery. As a result, manufacturers seeking to build manageable platforms that balance usability with size and power have hit a plateau of sorts, especially with rotary-wing craft. The invaluable ability to take off and land vertically comes with a significant energy cost. Some multi-rotor electric drones can only stay in the air for around half an hour before a battery swap or recharge is needed. For beyond visual line of sight (BVLOS) operations, that's a massive challenge.

A new drone battery concept

Which is where Impossible Aerospace enters the picture. The California startup has announced its exit from stealth mode, a $9.4 million funding round, and a ready-to-go product: the US-1. This electric drone has a flight time of up to two hours, beyond most of the solutions on the market – aside from hybrid models that rely partly on gasoline. Instead of designing a drone able to fly while carrying a battery, the Impossible Aerospace team has effectively developed a battery that can fly – in their words, a "battery-first approach". Impossible Aerospace has already sold units equipped with optical and thermal sensors to firefighters, police departments, and search and rescue teams across America. "The US-1 is more than just a drone. It's the first aircraft designed properly from the ground up to be electric, using existing battery cells without compromise," said Spencer Gore, CEO of Impossible Aerospace. "It's not so much an aircraft as it is a flying battery, leveraging an energy source that doubles as its primary structure. This is how electric aircraft must be built if they are to compete with conventional designs and displace petroleum fuels in aviation."

Engineered and assembled in the US

Much has been made of the Chinese influence in the commercial and consumer drone industries. Shenzhen-based manufacturer DJI currently has a huge – and deserved – market share. But that has led some to worry that the proliferation of Chinese hardware across America could be a security concern. Just last week at the InterDrone conference in Las Vegas, US-based 3DR partnered, ironically, with another Chinese manufacturer, Yuneec, to launch a commercial drone service for sensitive government operations. "From both a cost and environmental standpoint, the future of aviation is electric," said Greg Reichow of Eclipse Ventures. "We invested in Impossible Aerospace because of their thoughtful and systematic approach to re-thinking the fundamentals of electric aircraft.
“Our first product, the US-1, outperforms existing solutions in a market crying out for reliable, domestically-manufactured long-duration aircraft, while validating the technology required to build aircraft of the future.” Internet of Business says Although the new battery technology from Impossible Aerospace is the headline here, the company has confirmed that every US-1 will be engineered and assembled entirely in the US – hence the name, presumably. That’s good news for any buyers actively seeking domestic alternatives to Chinese-developed drones – especially in a political climate in which tech companies are under pressure to repatriate manufacturing as the trade war rages. However, Impossible Aerospace should bear in mind that it operates in a global market. Identifying products with US patriotism will doubtless play well at home, but it may not be a message that travels over long distances. It’s conceivable that ‘political BVLOS’ may prove to be as big a challenge as keeping drones in the air.
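The physics constraint described above (a heavier drone needs more hover power, which demands a bigger battery) can be roughed out with classical momentum theory. The sketch below uses entirely illustrative numbers, not Impossible Aerospace's published specifications, to show why multirotor endurance is so sensitive to weight and battery capacity.

```python
import math

# Illustrative numbers only - not the US-1's specifications.
mass_kg = 7.0            # all-up weight of a hypothetical quadcopter, battery included
rotor_diameter_m = 0.45
battery_wh = 700.0       # usable battery energy
g, air_density = 9.81, 1.225

disk_area = 4 * math.pi * (rotor_diameter_m / 2) ** 2   # four rotors

# Ideal hover power from momentum (actuator-disk) theory; real drones need more
# because of motor, propeller, and electronic losses.
ideal_hover_power_w = (mass_kg * g) ** 1.5 / math.sqrt(2 * air_density * disk_area)
realistic_power_w = ideal_hover_power_w / 0.6            # assume ~60% overall efficiency

hover_minutes = battery_wh * 3600 / realistic_power_w / 60
print(f"Ideal hover power: {ideal_hover_power_w:.0f} W")
print(f"Estimated hover endurance: {hover_minutes:.0f} minutes")
```

Adding battery mass raises the required hover power faster than it adds energy, which is why a "battery-first" airframe that uses the cells as structure is an attractive way around the plateau the article describes.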
<urn:uuid:7773b058-aa40-495b-8e3f-9dc71287574c>
CC-MAIN-2022-40
https://internetofbusiness.com/impossible-aerospace-drone-startup/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00499.warc.gz
en
0.944345
857
2.625
3
A team of researchers at the University of California has found that altering the signals that cells use to communicate with one another can cause changes to transcriptional outcomes, possibly resulting in the development of tumors. In their paper published in the journal Science, the group describes using optogenetics to control extracellular signaling and learn more about its impact on cell proliferation. Walter Kolch and Christina Kiel of University College Dublin offer a Perspective piece on the work done by the team in California in the same journal issue. Kolch and Kiel note that signal transduction pathways (STPs) between cells serve to support the conversion of biochemical reactions into predictable outcomes—they are even able to do so in the presence of extraneous noise, which suggests they have some ability to discriminate between different signals. The pair further notes that such discrimination can be enhanced by introducing changes to signaling, such as altering rise time, duration, decay rate and amplitude. Past research efforts have shown, for example, that making such changes to STPs can cause rat pheochromocytoma cells to differentiate or proliferate. But, as they further note, it is still not clear how such signals are encoded and decoded. In this new effort, the University of California team used a new approach to attempt to decipher STP codes. The new approach involved using a light-controlled mechanism to activate and deactivate the GTP-binding protein Ras on demand. GTP (guanosine triphosphate) is a nucleotide that carries phosphates and pyrophosphates, which are involved in directing chemical energy into specific biosynthetic pathways; Ras is active when bound to GTP. More specifically, the team used an optogenetic tool that allowed them to switch the activity of Ras on and off; Ras sits upstream of kinases such as BRAF in the pathway, and these proteins are normally activated by growth factor receptors. In so doing, they found that they were able to affect transcriptional outcomes, which, the researchers suggest, could lead to cell proliferation. In addition to learning more about the coding used by STPs, the group also suggests that they have developed a new means for probing responses of signaling networks—a tool that could lead toward a better understanding of biological regulation. More information: L. J. Bugaj et al. Cancer mutations and targeted drugs can disrupt dynamic signal encoding by the Ras-Erk pathway, Science (2018). DOI: 10.1126/science.aao3048
<urn:uuid:1e2fbe06-7292-4b5b-8bd1-45578c53c7a9>
CC-MAIN-2022-40
https://debuglies.com/2018/09/01/optogenetic-profiling-used-to-identify-alterations-in-ras-signaling-dynamics-within-cancer-cells/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00699.warc.gz
en
0.947482
499
3.390625
3
Meat refers to the muscle or organs of an animal consumed as food. It comes from a variety of different animals and is classified as either red or white, depending on the source. Meat is an excellent source of protein and several vitamins and minerals, including vitamin B12, niacin and selenium. The omega-3 fatty acids from seafood are good for boosting immunity as well. The protein helps in building and repairing body tissues and improving muscle activity. There are nine essential amino acids, namely histidine, leucine, lysine, isoleucine, methionine, phenylalanine, threonine, tryptophan, and valine; meat provides all nine and is therefore called a complete protein. Iron is one of the key minerals that aids in ensuring proper blood circulation and transport of oxygen to all cells. For more tips, follow our Today's Health Tip listing.
<urn:uuid:40c6847f-dc35-4595-b46a-0d8e44c3556a>
CC-MAIN-2022-40
https://areflect.com/2019/09/06/todays-health-tip-benefits-of-meat/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00699.warc.gz
en
0.925718
188
2.765625
3
Distributed Antenna Systems (DAS)

A distributed antenna system, or DAS, is a network of antennas connected to a common signal source, with that signal redistributed to increase cellular coverage throughout a specific area or structure. Distributed antenna systems consolidate all the wireless connections a building will require (cellular, emergency bands and Wi-Fi) into a few centralized locations, and then route the signals from those connections through a single wireless antenna infrastructure deployed throughout the building. A distributed antenna system can be deployed indoors (iDAS) or outdoors (oDAS). Rigstar technicians are highly trained and have extensive experience with installing and supporting DAS networks. Our system has the option to allow for remote monitoring that provides 24/7 visibility into the network, ensuring the system is operating efficiently. If a disruption is detected, our NOC will be able to remotely access the system and administer troubleshooting procedures.
<urn:uuid:cbb4c7a7-414c-479e-970f-b0aa54b7dc2b>
CC-MAIN-2022-40
https://www.rigstar.ca/cellular-enhancement/distributed-antenna-system-das/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00699.warc.gz
en
0.944417
184
2.65625
3
Generally, we tend to think that the yellow lighting we are accustomed to is the "warmest" light and one that will simulate sunlight the closest. However, a recent study shows that a color on the other end of the spectrum, blue lighting, helps children stay focused and healthy, especially when exposed during early hours of the school day. In the first post of our Classroom Lighting Matters series, we discussed a study by Liebel et al., which showed that using light with a higher blue content will shrink the diameter of the pupil, leading to higher visual acuity, or the ability to see clearly. The study also showed a direct link between pupil size and reading performance due to increased blue spectrum coloring as opposed to increased brightness. In this post, we'll explore how a study by Figueiro and Rea proved the benefits of exposure to blue spectrum lighting early in the morning because it prepares the body for a day of activity and stress.

How Does Exposure to Blue Spectrum Classroom Lighting Affect Students?

Our bodies have an internal clock, known as the circadian rhythm, that regulates our sleep, wake and other bodily processes. Our bodies contain an important gland called the pineal gland, which produces melatonin and serotonin, two hormones critical to the sleep cycle. Serotonin is responsible for keeping us alert, awake and aware, while melatonin is the hormone that helps us fall asleep. In order to jumpstart circadian rhythms for the rest of the day, we need exposure to short wavelength lighting, or bluer light. Exposure to the blue-green part of the color spectrum cues the pineal gland to release serotonin, which helps us wake up. During the school year most children are inside during the pivotal morning hours, and most schools do not provide adequate "daylight" lighting inside, so students are not reaching their full potential of performance, health and well-being. Research has shown that students who are not exposed to full spectrum, or daylight, lighting early in the morning test more poorly than students who have been exposed. For more information on how classroom lighting affects students' reading and math test scores, read our next post in the series.

Transforming Your Classroom into a Learning Zone

Lighting is a critical factor in providing an optimal learning environment. By providing daylight in the classroom with full spectrum light filters, students will be more alert, awake and energetic. Erik Hinds is Vice President of Helping People at Make Great Light. For more information on how fluorescent light filters for classrooms can improve the learning environment, please visit the resource center.
<urn:uuid:6b5c8c49-61d9-4cd3-ae5b-59ee3c12ccd0>
CC-MAIN-2022-40
https://mytechdecisions.com/facility/how-does-classroom-lighting-affect-the-students-part-ii/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00699.warc.gz
en
0.938325
527
3.6875
4
What is IaaS? Infrastructure as a Service (IaaS) is a cloud computing model in which a third-party cloud service provider (CSP) offers virtualized compute resources such as servers, data storage and network equipment on demand over the internet to clients. In the IaaS model, each computing resource is offered as an individual component or service and can be scaled up or down according to the organization’s needs. This significantly reduces or negates the need for physical servers, as well as an on-premises data center, and grants the organization much-needed flexibility to manage variable business needs quickly and cost effectively. While the cloud provider manages the cloud infrastructure itself in an IaaS model, the customer maintains responsibility for all other aspects of operations, including installation, configuration and management of software, applications, middleware and operating systems. The business is also responsible for maintaining security of anything they own or install on the infrastructure. Common Examples of IaaS - Amazon Web Services (AWS) as an on demand computing platform by Amazon - Google Cloud Infrastructure as a computing platform and central key management service by Google - Microsoft Azure as an on demand computing platform by Microsoft - HPE GreenLake as a cloud computing platform by HP IaaS vs PaaS and SaaS Aside from infrastructure as a service, the other two main “as-service” categories of cloud computing services are: - Software as a Service (SaaS): A software delivery model wherein the vendor centrally hosts an application in the cloud that can be accessed by a subscriber. The application does not need to be installed on a device; rather, it is accessed via the internet or an application programming interface (API). - Platform as a Service (PaaS): A platform delivery model that can be purchased and used to develop, run and manage applications. In the cloud platform model, the PaaS solution provider manages both the hardware and software used by application developers. How To Implement IaaS? Most organizations interested in an IaaS implementation follow what is known as a “lift-and-shift migration.” This is when an application or workload is adapted and redeployed in the cloud environment without updating any of the underlying architecture. Implementation can be completed in a public, private or hybrid cloud setting. Customers can then leverage a dashboard or API to access servers, the cloud service portfolio and data storage. Lift and shift migration is generally considered the fastest and least expensive migration option. However, it is generally only appropriate for individual applications or workloads. To enable a truly cloud-native environment, organizations must undergo a full cloud migration process, which is typically far more complex and time-consuming from an IT perspective, and also requires a broader change management plan to ensure proper adoption and management across the organization. Common IaaS Business Use Cases IaaS offers many valuable use cases to organizations, including: - Testing and development: In an IaaS model, organizations can quickly set up and tear down test and development environments using third-party infrastructure to develop applications more quickly and improve time to market. - Business continuity and recovery: IaaS leverages a private or public cloud to back up and store data. 
In the event of a server failure or error, the workload can be shifted to another server, limiting downtime for the business and eliminating the need for dedicated backup and staff. - Business transformation: IaaS enables organizations to access the processing power needed to collect, store and analyze large data sets and produce real-time insights. This capability is critical for leveraging advanced digital tools and technologies, including AI, ML and automation. - High Performance Computing (HPC): Some organizations have workflows that demand HPC-level computing to properly function. Some of these programs include geological modeling and financial modeling. - Remote workforce: An IaaS model enables employees to access servers remotely and in a geographically dispersed way. This is a necessary component for building and deploying a modern, global workforce, as well as enabling remote work capabilities. What Are the Advantages of IaaS? Many organizations are migrating to an IaaS model because it offers significant cost savings, reduces complexity within the IT environment and enables business transformation. Specific advantages of an IaaS model include: - Reduce or eliminate the need for hardware or on-premises data center, as well as costly infrastructure investments, upgrades and maintenance - Allow organizations to pay only for the services they need through a consumption-based billing model Access and Availability - Establish on-demand access and instant availability of compute resources - Provide continuous remote access of all services and enhanced application performance - Easily scale infrastructure components up or down based on the business’s variable needs - Manage unpredictable surges in demand quickly and efficiently – without a long-term resource commitment - Provide users with the option to shift to a different cloud in the event of a system failure or service disruption, which improves overall system reliability and enhances business continuity - Simplify and streamline backup and recovery system planning, management and updating - Establish guaranteed service-level agreements (SLAs) - Reduce complexity within the IT environment by eliminating or reducing the need for hardware or an on-site data center - Support the reallocation of limited business resources, including IT staff - Enable advanced data and digital technologies, including AI, ML and automation - Quickly set up test and development environments using third-party infrastructure, which helps organizations bring new applications to market faster
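The on-demand access and scalability points above are easiest to picture with a concrete call. The sketch below is a hedged illustration using AWS's boto3 SDK (AWS being one of the IaaS examples listed earlier); the region, AMI ID, and tag values are placeholders rather than recommendations, and other providers expose equivalent APIs.

```python
import boto3

# Placeholder values - substitute a real region and AMI ID for your own account.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision a single small virtual server on demand for a test environment...
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "environment", "Value": "test"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# ...and tear it down when the test or development work is finished,
# so the business only pays for the capacity it actually used.
ec2.terminate_instances(InstanceIds=[instance_id])
```

Because the same calls can be scripted, scaled out, and reversed in minutes, the consumption-based billing and rapid test-environment setup described above follow directly from the model.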
<urn:uuid:6f5c70e3-5ad2-478e-a870-23425c70fd06>
CC-MAIN-2022-40
https://www.crowdstrike.com/cybersecurity-101/infrastructure-as-a-service-iaas/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00699.warc.gz
en
0.922507
1,167
2.546875
3
A NATIONAL CYBERSECURITY AWARENESS CAMPAIGN: In 2009, President Obama issued the Cyberspace Policy Review, which tasked the Department of Homeland Security with creating an ongoing cybersecurity awareness campaign – Stop.Think.Connect.™ – to help Americans understand the risks that come with being online. Stop.Think.Connect.™ challenges Americans to be more vigilant about practicing safe online habits and encourages them to view Internet safety as a shared responsibility at home, in the workplace, and in our communities. • How do you use the Internet? • What are your main concerns about using the Internet? • Have you ever had your identity stolen? • Do you have antivirus software on your computer and update it on a regular basis? Email, instant messaging, and personal websites now provide easy ways for everyone to stay connected, informed, and involved with family and friends. The Internet also provides an easy way to shop, plan travel, and manage finances. • Many scammers target Americans ages 65 and older via emails and websites for charitable donations, online dating services, online auctions, buyer's clubs, health insurance, prescription medications, and health care. • Many of the crimes that occur in real life happen on the Internet too. Credit card fraud and identity theft, embezzlement, and more – all can be and are being done online. • At home, at work, and in the community, our growing use of technology, coupled with increasing cyber threats and risks to our privacy, demands greater security in our online world.
<urn:uuid:c48032d6-d3d2-470a-b8ee-581da3973efc>
CC-MAIN-2022-40
https://cybermaterial.com/a-national-cybersecurity-awareness-campaign-older-americans/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00099.warc.gz
en
0.918692
336
3.0625
3
LPWANs (Low Power Wide Area Networks) – The backbone of Industrial IoT?

July 20, 2017

LPWANs (low power wide area networks) are set to play a pivotal role in the connectivity of devices over the next few years, particularly as we move into a world dominated by the Internet of Things (IoT), where a large number of devices need to be low mobility, low power and low cost to succeed. LPWAN is designed for M2M (machine-to-machine) networks. There are a number of emerging platforms and technologies for LPWANs, for example LoRa-based, Ultra Narrowband, and LTE and LTE-MA. By bridging the gap between existing local wireless and mobile WAN technologies, they will help to provide many industries with the flexibility that current network infrastructure technologies cannot offer. For example, in areas such as logistics and smart farming, where employees and equipment cover large distances and interact with a multitude of different devices each day, LPWANs will offer the connectivity that will provide essential data to improve productivity. They are particularly effective for devices that require low mobility and low levels of data transfer: for example, within urban areas, things like alarms and sensors, parking spaces and street lighting, and in rural agricultural areas, turbines, soil sensors or weight/mass counters. Because the devices used in LPWANs require such low levels of power, devices could last up to ten years on a single charge, one of the most appealing benefits. Other key benefits include: - Minimal base stations would be required to provide the coverage - As LPWANs are complementary to existing mobile networks, operators could install the new technology into the existing infrastructure relatively simply - Very good indoor penetration and coverage - Optimised for low throughput at long or short distances
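The "up to ten years on a single charge" figure mentioned above follows from how rarely and briefly an LPWAN node powers its radio. The hedged back-of-envelope Python sketch below uses illustrative current draws, not any specific chipset's datasheet values, to show how aggressive duty cycling turns a small battery into years of operation.

```python
# Illustrative figures for a hypothetical duty-cycled LPWAN sensor node;
# real values depend on the radio, payload size, and network parameters.
battery_mah = 2600.0          # one lithium primary cell
sleep_current_ma = 0.01       # deep-sleep draw between transmissions
tx_current_ma = 40.0          # draw while the radio is transmitting
tx_seconds_per_day = 24       # one short one-second uplink per hour

seconds_per_day = 24 * 3600
avg_current_ma = (
    tx_current_ma * tx_seconds_per_day
    + sleep_current_ma * (seconds_per_day - tx_seconds_per_day)
) / seconds_per_day

battery_life_years = battery_mah / avg_current_ma / 24 / 365
print(f"Average current draw: {avg_current_ma * 1000:.1f} microamps")
print(f"Estimated battery life: {battery_life_years:.1f} years")
# Self-discharge and sensor peripherals lower this in practice, which is why
# vendors typically quote battery lives of up to about ten years.
```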
<urn:uuid:ee8b00ef-dfcb-4048-aac0-8e84ab14d6cf>
CC-MAIN-2022-40
https://www.carritech.com/news/lpwans-low-power-wide-area-networks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00099.warc.gz
en
0.922052
408
2.703125
3
In the wake of several high-profile data breaches, companies, governments, and cybersecurity experts are calling for a more proactive approach to data protection. Using machine learning and artificial intelligence, cybersecurity experts are detecting identity theft faster and more efficiently than ever before.

The state of cybersecurity in 2019

The Equifax hack in 2017 marked the beginning of a new era in data security. The sheer scope of the breach—with over 147.7 million Americans affected—embedded a sense of defeatism in data security. Many Americans have become apathetic about losing the privacy of their personal information, yet identity theft remains a $1.48 billion problem. But artificial intelligence (AI) is starting to change how we look at identity theft.

Why we need artificial intelligence

The more connected we become, the more help we need to keep track of our information. The average email address in the US is connected to 130 individual user accounts across shopping, gaming, business, or banking websites. Any one of those accounts could include personal info like birth dates, credit card numbers, or social security numbers, and any individual company could be the target of a breach. With over 100 accounts per person, there's far too much personal information floating around the web for any human-controlled security protocol to monitor and protect. Machine learning uses algorithms to track and analyze huge collections of data. These programs "learn" by identifying and encoding patterns found in the data, improving their function over time. An AI-driven cybersecurity algorithm may be the only thing capable of sifting through the petabytes of information (think gigabytes, but in the millions) available on the dark web, the internet's marketplace for stolen information.

AI-powered cybersecurity: how it works

Cybersecurity analysts are engaged in a constant arms race against identity thieves, hackers, and bad actors. And this arms race drives innovation on both sides. Staying ahead of threats requires the rapid development of countermeasures, which in turn requires up-to-date and reliable data. Cybersecurity teams are increasingly turning to AI to increase the speed and efficacy of their detection processes. The use of AI for this work breaks down into three phases: optimizing the data on the dark web, identifying key information, and developing early warning systems.

Phase 1: Sifting through the dark web

The dark web's marketplaces for stolen data are only part of the picture; the dark web also stores millions of benign files that can confuse identity theft detection because they look like false leads. To dig through the dark web, AI algorithms are first trained to sift out these unneeded files. Many of these algorithms have a 99% accuracy rate or higher—and when a false positive does occur, an expert analyst can correct the AI and further refine its accuracy.

Phase 2: Threat identification

Once an AI system identifies what data is useful, it can make connections between sales or conversations that occur on the dark web and the actual users behind the criminal activity. Marketplaces are used to exchange credit card info and "fullz"—a stolen identity that contains enough information for just about any form of identity theft. These marketplaces can pop up and vanish over just a few months or days, making them difficult to track. As an AI system recognizes patterns in the dark web, it can form the basis of an anti-theft system.
The MIT Lincoln Laboratory’s Artificial Intelligence Technology and Systems Group uses an AI that’s trained to link activity on the dark web to individual users by sniffing out miniscule similarities between usernames or the language used to sell the stolen information. Phase 3: Early warning systems A hack of a major company like Equifax or Marriott provides massive amounts of data for cybersecurity experts to examine after the breach, but researchers are beginning to look for patterns before a hack occurs. Researchers at Kroll, a risk management firm specializing in cybersecurity, use AI to detect clusters of activity preceding a major breach. As bad actors begin distributing hacked information on the dark web, these AI algorithms can detect the upsurge in activity, allowing a large organization to protect compromised data before a full breach can happen. Artificial intelligence technologies are not exclusive to cybersecurity firms, though. Machine learning tech is rapidly trickling down into the hands of bad actors who can use AI systems to capture personal information from vulnerable servers. A successful hacker wants to reach the maximum number of victims with minimal effort. This incentive to scale up an attack could drive many thieves to implement AI tools, which make short work out of large-scale problems. AI tools are currently available as open-source software, allowing anyone to download and implement AI in a hacking routine. A motivated individual could find everything they need to train themselves in the use of AI, which makes the emergence of nefarious AI programs inevitable. A hacker might use an AI algorithm to shield their attack by replicating normal server traffic, or they might use it to design an adaptive computer virus that outpaces countermeasures. Benevolent AI systems may also be vulnerable Your identifying information is (more than likely) already in use by an AI. AI algorithms are plentiful in today’s world and are used for everything from your user profiles on music and video streaming sites to your ad profiles on social media. However, even benevolent AI tools are difficult to analyze. While it’s clear that AI algorithms are effective, sometimes how an AI reaches a final decision is largely unknown. Even AI creators can’t often discern how an individual decision gets made by a complex AI—what’s known as a “black box” algorithm. This lack of visibility can make threat detection in an AI-powered system harder to map, and a hacker may soon be able to manipulate an AI’s decision process to reroute your data without human operators perceiving a threat. How to protect your identity Protecting your data in a post-AI world is a difficult problem to solve. The arms race in cybersecurity continues to accelerate, and AI algorithms have a habit of pushing down the gas pedal. The learning aspect of AI tools means they’re constantly improving at astounding rates. You can see the rapid increase by looking at how quickly facial recognition technology or “deepfake” algorithms have improved in the last few years. That said, the scaling problem faced by hackers can give you an advantage when protecting your data. Identity thieves are looking for easy targets, and a few safety precautions could make your identity more trouble than it’s worth: - Step 1: Place a Security Freeze on Your Credit Score. A security freeze is a simple counter-measure that can prevent a would-be identity thief from opening new accounts in your name. 
You can file for one with Equifax, Experian, and TransUnion, the three major credit bureaus. Once a freeze is in place, your credit score will be given out only to institutions you already have a relationship with, like your bank. - Step 2: Sign Up for Identity Theft Protection. After several high-profile breaches in the last two years, it’s likely that your data is already out on the dark web. An identity theft protection service provides an additional layer of security, helping you respond faster to any unauthorized use of your identity by criminals. These services offer monitoring for your Social Security number and credit score along with insurance against loss in the event of an unavoidable breach. - Step 3: Follow the Security Basics. While the future of digital threats looks more and more complicated, following the basic steps to protect your identity still goes a long way. Use diverse and long passwords, stay away from untrusted websites, and regularly check your credit report. Take advantage of any free monitoring through your bank for suspicious activity to help catch fraudulent charges. You should protect any physical copies of your personal information, too, so keep your wallet or purse in a safe place and shred any sensitive documents when you throw them away.
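The "early warning" idea from Phase 3 above can be illustrated with a generic anomaly detector. The sketch below is not Kroll's or MIT Lincoln Laboratory's actual system; it is a hedged, minimal example using scikit-learn to flag days when the volume of new stolen-data listings on a monitored marketplace suddenly surges, with entirely synthetic numbers.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical daily counts of new stolen-identity listings on a monitored marketplace.
rng = np.random.default_rng(seed=7)
normal_days = rng.poisson(lam=20, size=60)    # typical background activity
spike_days = rng.poisson(lam=95, size=3)      # a sudden surge worth investigating
daily_listings = np.concatenate([normal_days, spike_days]).reshape(-1, 1)

# Train an unsupervised anomaly detector on the observed history.
detector = IsolationForest(contamination=0.05, random_state=0)
labels = detector.fit_predict(daily_listings)  # -1 marks anomalous days

for day, (count, label) in enumerate(zip(daily_listings.ravel(), labels), start=1):
    if label == -1:
        print(f"Day {day}: {count} new listings - unusual surge, raise an early warning")
```

A real pipeline would feed in far richer features (seller identities, language patterns, pricing), but the principle is the same: learn what normal looks like, then alert on the departures.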
<urn:uuid:5b29ab91-cb25-4f82-91fd-427db9300a3f>
CC-MAIN-2022-40
https://www.crayondata.com/how-machine-learning-is-changing-identity-theft-detection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00099.warc.gz
en
0.924959
1,642
3.265625
3
As data centers continue to spring up in China, the government has issued a new set of guidelines intended to cut their environmental impact, and launched a scheme in which 100 pilot projects will lead the industry's green journey. As China's digital economy grows rapidly, data centers are rolling out one after another, with an estimated 400,000 data centers in the country consuming 1.5 percent of the power of the whole emerging economy. These data centers fall behind international standards for energy efficiency, and face dual pressures over power consumption and environmental pollution. To address this, three government bodies have got together to set up a scheme to get best practices implemented in a set of prototype data centers that will lead the rest of the industry onwards. China's Ministry of Industry and Information Technology (MIIT), National Government Offices Administration, and National Energy Administration have issued a document called the Guideline for Pilot Projects of Green Data Centers (GPPGDC), which sets out how to implement the energy-saving advice of the State Council. It also demonstrates the government's determination to enhance the energy efficiency and environmental friendliness of data centers.

Necessity and goals

The Guideline has hit on the pilot project idea as a way to move China's data centers towards greater energy efficiency - because at present they lag the best practice in the rest of the world. Currently the average PUE of data centers in the US is 1.9, while advanced data centers can achieve a PUE lower than 1.2. Despite this, the PUE of most data centers in China is around 2.2, which represents a huge gap with international best practices. Moreover, such data centers emit huge amounts of greenhouse gases and consume large quantities of water, posing great challenges to both resources and the environment. The Guideline has been put together with reference to environmental standards from more developed markets. These include the US-based Data Center Energy Star Program and the Federal Data Center Consolidation Initiative implemented by the US government. The guide also draws on the European Code of Conduct for Energy Efficiency in Data Centers implemented by the European Union, and the industry standards and best practices promoted by the Green Grid. By 2017, the GPPGDC plan calls for 100 leading green data centers to be endorsed and operating as pilot projects, covering key industries including manufacturing, energy, internet, public institutions and finance. These projects must display technology innovations and help develop standards, promoting their experience to help guide the whole data center industry on a green journey towards low carbon emissions and low energy consumption. The scheme is looking for an eight percent improvement in efficiency ratings. Alongside this, four national standards on green data centers will be rolled out; 40 advanced technologies, products and best practices for operating and maintaining green data centers will be promoted; and the Guideline for Building Green Data Centers will be formulated.
Requirements for Pilot Projects

The pilot projects must comply with the Guidelines for Locating Data Centers, and must be completed and brought online before the end of February 2016. They must also make greater use of green and smart servers and energy management information systems, and use energy-saving and environmentally friendly technologies including waste heat recycling, fresh air cooling, distributed energy supply, and high voltage direct current. They are also asked to develop proper operation and maintenance mechanisms for green data centers, and create coordination mechanisms between the management and technical teams. According to the guidelines, the pilot projects have to monitor and collect data on energy efficiency and environmental friendliness on a regular basis, and such data will be reviewed by relevant provincial authorities before it is submitted to the Ministry of Industry and Information Technology. Then a relevant institution entrusted by MIIT will monitor such pilot projects for one year in accordance with an as-yet-unpublished set of National Green Data Center Evaluation Criteria. Projects meeting the Criteria will be posted on MIIT's website as green data centers; any that are disqualified will be removed from the list. The guideline also calls for a set of third-party testing, evaluation and consulting agencies, as well as specialist products and service providers. Companies involved in the pilot projects are encouraged to expand contracted energy management cooperation with energy-saving service providers, to study emission trading mechanisms, and to find new financing models for green data centers. GPPGDC is just one of the Chinese government's guidelines for the data center industry. In January 2013, MIIT, together with the National Energy Administration, announced the Guidelines for Locating Data Centers. In February 2013, MIIT issued the Guidelines for Promoting Energy Saving and Emission Reduction of the Communication Industry. Moreover, non-government organizations have also launched programs for grading the "greenness" of data centers. Edited by Peter Judge
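Since PUE (Power Usage Effectiveness) is the headline metric throughout these guidelines, a quick illustration of how it is calculated may help. The facility figures in the Python sketch below are purely illustrative and simply echo the averages quoted above.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Illustrative figures only: the same 1,000 kW IT load in three different facilities.
it_load_kw = 1000.0
for label, facility_kw in [("Typical Chinese facility", 2200.0),
                           ("US average", 1900.0),
                           ("Advanced facility", 1200.0)]:
    print(f"{label}: PUE = {pue(facility_kw, it_load_kw):.2f}")

# Everything above a PUE of 1.0 is overhead (cooling, power conversion, lighting)
# rather than useful computing, which is why the scheme targets lower ratios.
```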
Estimates suggest that enterprise technology accounts for 3-5% of global power consumption and 1-2% of carbon emissions. Although technology systems are becoming more power efficient, optimizing power consumption is a key priority for enterprises that want to reduce their carbon footprints and build sustainable businesses. Cloud modernization can play an effective part in this journey if done right.

Current practices are not aligned to sustainable technology

The way systems are designed, built, and run impacts enterprises' electricity consumption and CO2 emissions. Let's take a look at the three big segments:

- Architecture: IT workloads, by design, are built for failover and recovery. Though businesses need these backups in case the main systems go down, the duplication results in significant electricity consumption. Most IT systems were built for the "age of deficiency," wherein underlying infrastructure assets were costly, rationed, and difficult to provision. Every major system has a massive backup to counter failure events, essentially multiplying electricity consumption.
- Build: Consider that for each large ERP production system there are 6 to 10 non-production systems across development, testing, and staging. Developers, QA, security, and pre-production teams ended up building their own environments. Yet whenever systems were built, the entire infrastructure had to be configured even though the team needed only 10-20% of it. Thus, much of the electricity consumption ended up powering capacity that wasn't needed at all.
- Run: Operations teams have to make do with what the upstream teams have given them. They can't take systems down to save power on their own, because the systems weren't designed to work that way. So the run teams ensure every IT system is up and running. Their KPIs are tied to availability and uptime, meaning they are incentivized to make systems "over-available" even when they aren't being used. The run teams didn't – and still don't – have real-time insight into the operational KPIs of their systems landscape to dynamically decide which systems to shut off to save power.

The role of cloud modernization in building a sustainable technology ecosystem

In February 2020, an article published in the journal Science suggested that, although digital services from large data center and cloud vendors grew sixfold between 2010 and 2018, energy consumption grew by only 6%. I discussed power consumption as an important element of "Ethical Cloud" in a blog I wrote earlier this year. Many cynics say that cloud just shifts power consumption from the enterprise to the cloud vendor. There's a grain of truth to that. But I'm addressing a different aspect of cloud: using cloud services to modernize the technology environment and envision newer practices that create a sustainable technology landscape, regardless of whether the cloud services are vendor-delivered or client-owned.

Cloud 1.0 and 2.0: By now, many architects have used cloud's run-time access to underlying infrastructure, which can definitely address the issues around over-provisioning. Virtual servers on the cloud can be switched on or off as needed, and doing so reduces carbon emissions. Moreover, because cloud instances can be provisioned quickly, they are – by design – fault tolerant, so they don't rely on excessive backup systems. They can be designed to go down, with their backups coming online immediately rather than running all the time.
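To make the "switch it off when idle" idea concrete, here is a minimal sketch of stopping non-production servers outside working hours. It assumes AWS EC2 via boto3 and a hypothetical tagging convention (env=dev); it is an illustration of the pattern, not a prescription from this post.

```python
# Minimal sketch: stop running instances tagged env=dev outside working hours.
# Assumes AWS credentials are configured; the tag name and schedule are
# hypothetical conventions introduced for this example.
import boto3

ec2 = boto3.client("ec2")

def stop_idle_dev_instances() -> list:
    """Find running instances tagged env=dev and stop them."""
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:env", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

# Run this from a nightly scheduler (cron, EventBridge, etc.) so development
# capacity only draws power during working hours.
```

Run as a scheduled job, a few lines like this keep development and test capacity powered only when someone is actually using it.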
The development, test, and operations teams can provision infrastructure as and when needed, and they can shut it down when their work is completed.

Cloud 3.0: In the next wave of cloud services, with enabling technologies such as containers, functions, and event-driven applications, enterprises can amplify their sustainable technology initiatives. Enterprise architects will design workloads that treat failure as an essential element to be handled through orchestration of run-time cloud resources, instead of relying on traditional failover methods that promote over-consumption. They can modernize existing workloads that need "always on" infrastructure and underlying services to an event-driven model, in which the application code and infrastructure lie idle and come online only when needed. A while back I wrote a blog that talks about how AI can be used to compose an application at run time instead of keeping it always available.

Server virtualization played an important role in reducing power consumption. Now, by using containers, which are significantly more efficient than virtual machines, enterprises can further reduce their power consumption and carbon emissions.

Though cloud sprawl is stretching operations teams, newer automated monitoring tools are becoming effective at providing a real-time view of the technology landscape. This view helps them optimize asset uptime. They can also build infrastructure code within development to make an application aware of when it can let go of IT assets and kill zombie instances (a brief sketch of this idea appears at the end of this post), which enables the operations team to focus on automating and optimizing instead of managing systems that are always on. Moreover, because the entire cloud migration process is getting optimized and automated, power consumption is further reduced.

Newer cloud-native workloads are being built in the above model. However, enterprises have large legacy technology landscapes that need to move to an on-demand, cloud-led model if they are serious about their sustainability initiatives. Though the business case for legacy modernization does consider power consumption, it mostly focuses on the movement of workloads from on-premises to the cloud. It doesn't usually consider architectural changes that can reduce power consumption, even on a client-owned cloud platform.

When considering next-generation cloud services, enterprises should rethink their modernization journeys beyond a data center exit to building a sustainable technology landscape. They should consider leveraging cloud-led enabling technologies to fundamentally change the way their workloads are architected, built, and run. Enterprises can only think of building a sustainable business through sustainable technology when they've adopted cloud modernization as a potent force for reducing power consumption and carbon emissions.

This is a complex topic to solve for, but we all have to start somewhere. And there are certainly other options to consider, like greater reliance on renewable energy, reduction in travel, etc. I'd love to hear what you're doing, whether it's using cloud modernization to reduce carbon emissions, just shifting your emissions to a cloud vendor, or another approach. Please write to me at [email protected].
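As a postscript to the zombie-instance point above, here is a minimal, hypothetical sketch of what "let go of idle assets" might look like in practice. It checks AWS CloudWatch CPU metrics via boto3; the 2% utilization threshold and seven-day lookback are illustrative assumptions, not figures from this post.

```python
# Sketch: flag "zombie" instances whose average CPU has stayed very low.
# The threshold (2%) and 7-day window are illustrative assumptions.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def find_zombie_instances(cpu_threshold: float = 2.0, days: int = 7) -> list:
    """Return IDs of running instances with consistently negligible CPU use."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    zombies = []
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
                StartTime=start,
                EndTime=end,
                Period=3600,
                Statistics=["Average"],
            )
            points = [p["Average"] for p in stats["Datapoints"]]
            # No datapoints or consistently tiny CPU => candidate for shutdown.
            if not points or max(points) < cpu_threshold:
                zombies.append(inst["InstanceId"])
    return zombies
```

A scheduled job built on a check like this can notify owners or stop the instances outright, so the operations team spends its time automating rather than babysitting machines nobody is using.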
Infrastructure as a Service, or IaaS, is one of the three fundamental service delivery methods for cloud computing, alongside Platform as a Service and Software as a Service. It allows you to access compute, storage, networking, servers and virtualisation via the internet from a third-party provider in order to create the IT environment that best fits your business requirements. It is also one of the fastest-growing categories of cloud computing because of the growing number of enterprises using IaaS to make the shift from being solely on premise.

IaaS refers to the pool of physical or virtual infrastructure that a cloud provider delivers for an organisation from its data centres. The components that IaaS provides to a customer include the data centres, servers, storage and networking, as well as the virtualisation or hypervisor layer if required. The customer simply decides how much resource it requires and orders it accordingly. Typically IaaS will also include the management of the infrastructure, but it can extend to other services such as IP addresses, network connections, load balancers, billing management and access management, as well as storage resiliency, backup and replication. These resources allow users to install operating systems, deploy databases, create storage and install workloads as they require. IaaS tends to be charged on a pay-as-you-go hourly, weekly or monthly basis. The availability of the service and the details of how the infrastructure is configured depend very much on the IaaS provider.

IaaS, PaaS and SaaS: What's the Difference?

Platform as a Service (PaaS) is the next step up from IaaS, where the provider also supplies the operating environment, including the operating system, application services, middleware and other 'runtimes' for cloud users. It's used for development environments where the business can focus on creating an app but wants someone else to maintain the deployment platform. It means you have much simpler workloads, but you can't necessarily be as flexible as you want.

At the highest level of orchestration is Software as a Service (SaaS), where applications are accessed on demand. Here you just open your browser and go, consuming software rather than installing and running it. A user simply logs on to access the provider's application. Users can decide how the app will work, but pretty much everything else is the responsibility of the software provider.

Benefits of IaaS

So why would you choose IaaS? IaaS allows you to significantly reduce the cost of your IT infrastructure because there is no need for you to purchase your own data centre hardware or monitor and maintain your own equipment. You hand responsibility for the maintenance of the infrastructure to the IaaS provider, whose job it is to ensure its availability. The main benefits of IaaS are:

- Cost effective
- Pay-as-you-go OPEX model
- High degree of control over resources
- Scalable capacity to meet changing demand
- Access services as you need them and from anywhere
- Secure data centres
- Redundant and resilient – no single point of failure
- 24/7/365 availability
- Develop and get services to market quickly

The drawback to IaaS centres on your ability to manage it. If there are issues with the network or downtime, workloads will be affected, so it can be more difficult to manage and monitor. It is therefore essential to have staff who understand it and can manage the relationship with the IaaS provider.
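To illustrate the pay-as-you-go, order-what-you-need model described above, here is a minimal sketch of provisioning and releasing a virtual server programmatically. It uses AWS EC2 through boto3 purely as an example; the AMI ID, instance type and key name are placeholders you would replace for your own provider and account.

```python
# Sketch: order a virtual server when you need it, release it when you don't.
# The AMI ID, instance type and key name below are placeholders, not real values.
import boto3

ec2 = boto3.resource("ec2")

def provision_server() -> str:
    """Launch a single small virtual machine and return its ID."""
    instances = ec2.create_instances(
        ImageId="ami-xxxxxxxx",    # placeholder machine image
        InstanceType="t3.micro",   # small, pay-by-the-hour instance
        MinCount=1,
        MaxCount=1,
        KeyName="my-key",          # placeholder SSH key pair
    )
    return instances[0].id

def release_server(instance_id: str) -> None:
    """Terminate the instance so you stop paying for it."""
    ec2.instances.filter(InstanceIds=[instance_id]).terminate()
```

The same pattern applies with any IaaS provider's SDK or with tools such as Terraform: capacity is requested, billed while it runs, and handed back when the work is done.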
Example Cases for IaaS

There are many different circumstances in which you would use IaaS. It is particularly useful for creating and testing new workloads. If an organisation is creating a new application, it might be cheaper to host and test it with an IaaS provider. Once it has been tested successfully, the workload can then be brought in house or kept on a long-term deployment.

By using IaaS a business can use a third party to host and scale its IT systems as they grow. It can host websites on pooled virtual resources from the provider's physical servers or create a virtualised network of interconnected servers.

Drivers for choosing IaaS include:

- Test and development
- Website hosting
- Storage, backup and recovery
- Web application hosting
- High performance computing
- Big Data

The major IaaS providers are AWS, Microsoft Azure and Google Cloud. However, there are other suppliers, such as iomart, that offer all the benefits of IaaS from these hyperscale vendors along with additional options better suited to a single tenant who wants the security of knowing they are being hosted in UK data centres. All offer a broad range of services to support the most complicated of hosting environments.

iomart offers a variety of IaaS solutions which cover the differing requirements of the modern business:

Public Cloud – the multi-tenant option via iomart's own public cloud or the managed clouds of AWS, Microsoft Azure and Google Cloud.

Private Cloud – a secure single-tenant solution for more sensitive data.

Hybrid Cloud – a combination of public and private cloud for scalable, high-performance infrastructure.

Dedicated Servers (bare metal) – the option for single tenants with business-critical operations who have strict regulatory compliance requirements.

Find out more about IaaS from iomart
Overfitting: What to Do When Your Model Is Synced Too Closely to Your Training Data

The main objective for many data scientists is to build machine learning models that predict outcomes on unseen data that wasn't used in the development process. The performance of a model on unseen data is referred to as its ability to generalize, and it is ultimately how the model will be judged. If generalization does not meet expectations, the result will be poor outcomes and possibly a reduction in stakeholder confidence in machine learning. There are a number of reasons a model may fail to generalize, but one common culprit is something called overfitting, which will be the focus of this blog post.

The DataRobot AI Wiki defines overfitting as a model that is "too attuned to the data on which it was trained and therefore loses its applicability to any other dataset." However, upon first read, this definition may not be clear enough, so I'll use a scenario from everyday life to provide some intuition. Consider a student who has an upcoming exam and whose teacher was kind enough to provide some study materials, including practice questions with answers. Our student pores over what the teacher has provided, memorizing each detail. When it comes time to take the exam, the student copies the provided answers down faithfully and is shocked when she receives a bad score. This is overfitting.

How Does Overfitting Occur?

In the example above, a poor test grade was the outcome of overfitting, but with a real-world machine learning problem, such as predicting whether a loan will default, there could be very costly consequences. Therefore, it is crucial to take steps that reduce the risk of overfitting. Before we dive into methods to control overfitting, however, it is important to understand how it happens in the first place.

What causes overfitting?

Overfitting occurs when a model's parameters and hyperparameters are optimized to get the best possible performance on the training data. By optimizing for error on the training data and ignoring how the model will perform outside of that sample, a machine learning model will have excellent performance on the training data and poor performance on new data. For example, a decision tree can achieve perfect training performance by allowing an unlimited number of splits (a hyperparameter). However, that decision tree would perform poorly when supplied with new, unseen data.

How to control for overfitting

Use a validation dataset

The validation dataset is an additional partition that can be used to help make modeling decisions, such as setting hyperparameters. Imagine all of your data split into three sections: one used for training, a holdout set that isn't touched until all modeling decisions are made, and a validation set. The validation dataset can be used to evaluate a model's performance with a given set of hyperparameters. If performance is poor on the validation set, it's worth evaluating the model and making changes before retesting.

Cross-validation is useful for selecting hyperparameters and is done by splitting the training data into N different partitions, called folds, for training and evaluation. For example, for fivefold cross-validation, split the data into five partitions, train on four, and test on the remaining partition. Do this five times, with a different partition selected for testing each time.
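As a concrete illustration of the cross-validation procedure just described, here is a minimal sketch using scikit-learn. The synthetic dataset, the choice of a decision tree, and the candidate depths are arbitrary stand-ins for whatever you are actually modeling.

```python
# Sketch: fivefold cross-validation to pick a decision tree's max_depth.
# The synthetic dataset and candidate depths are arbitrary examples.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

best_depth, best_score = None, -1.0
for depth in [2, 4, 8, 16, None]:   # None = unlimited splits, prone to overfit
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    scores = cross_val_score(model, X, y, cv=5)   # five folds, five scores
    mean_score = scores.mean()
    print(f"max_depth={depth}: mean CV accuracy = {mean_score:.3f}")
    if mean_score > best_score:
        best_depth, best_score = depth, mean_score

print(f"Selected max_depth={best_depth} for a final check on the validation set")
```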
This process will result in five scores for one round of fivefold cross-validation. The scores can be averaged to give an overall idea of performance. Repeat the process for each set of hyperparameters and select the best combination for testing on the validation dataset.

Ensemble models combine many models together – typically simple models called weak learners – and aggregate their output predictions to create one final prediction. Ensembles work well when the sub-models are diverse, that is, when their outputs have low correlation. Ensemble models can be very robust to overfitting and tend to generalize well.

For linear models, very large coefficients can be a sign of overfitting. Regularization is a technique that penalizes linear models for having large coefficients. Popular techniques include ridge and lasso regression (a brief sketch appears at the end of this post).

There is no substitute for domain knowledge when building machine learning models. Identifying target leakage often requires deep knowledge of the data being used, so understanding the data and what it represents is crucial to avoiding overfitting.

Overfitting is something to be careful of when building predictive models, and it is a mistake commonly made by both inexperienced and experienced data scientists. In this blog post, I've outlined a few techniques that can help you reduce the risk of overfitting. The DataRobot automated machine learning platform takes advantage of best practices for reducing overfitting. Additionally, transparency tools like Feature Impact aid in the identification of target leakage and provide the information needed to have full confidence in the models being implemented.
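As promised above, here is a minimal sketch of ridge and lasso regularization with scikit-learn. The synthetic data and alpha values are illustrative; in practice you would choose alpha using the cross-validation procedure described earlier.

```python
# Sketch: regularization shrinks the large coefficients that plain least squares
# can produce when it overfits. Data and alpha values are illustrative only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso

# Few samples, many features: a setup where plain linear regression overfits.
X, y = make_regression(n_samples=50, n_features=40, noise=10.0, random_state=0)

models = {
    "ordinary least squares": LinearRegression(),
    "ridge (alpha=1.0)": Ridge(alpha=1.0),
    "lasso (alpha=1.0)": Lasso(alpha=1.0),
}

for name, model in models.items():
    model.fit(X, y)
    coefs = np.abs(model.coef_)
    print(f"{name}: largest |coefficient| = {coefs.max():.1f}, "
          f"non-zero coefficients = {(coefs > 1e-6).sum()}")

# Ridge shrinks coefficients toward zero; lasso can push many exactly to zero.
# Both curb the very large weights that often signal overfitting.
```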