The European Union (EU) has launched strategies for artificial intelligence (AI) and the “data economy”, with ethics and transparency as watchwords. It has also presented what it calls the “human-centric development of AI” as critical to “fighting climate change”.

The European Commission (EC) said in a statement that it favours a “European society powered by digital solutions that put people first, open up new opportunities for businesses, and boost the development of trustworthy technology to foster an open and democratic society and a vibrant and sustainable economy”.

Ursula von der Leyen, president of the EC, said: “Today we are presenting our ambition to shape Europe’s digital future. It covers everything from cyber security to critical infrastructures, digital education to skills, democracy to media. I want that digital Europe to reflect the best of Europe – open, fair, diverse, democratic and confident.”

Margrethe Vestager, executive vice-president for “A Europe fit for the digital age”, said in the same statement: “We want every citizen, every employee, every business to stand a fair chance to reap the benefits of digitisation, whether that means driving more safely or polluting less thanks to connected cars, or even saving lives with AI-driven medical imagery that allows doctors to detect diseases earlier than ever before.”

And Thierry Breton, commissioner for the internal market, added: “Our society is generating a huge wave of industrial and public data, which will transform the way we produce, consume and live. I want European businesses and our many SMEs [small and medium-sized enterprises] to access this data and create value for Europeans – including by developing AI applications.
“Europe has everything it takes to lead the ‘big data’ race, and preserve its technological sovereignty, industrial leadership and economic competitiveness to the benefit of European consumers.”

The EC has said it will “focus on three key objectives in digital: technology that works for people, a fair and competitive economy, and an open, democratic and sustainable society”.

It added: “Europe has all it needs to become a world leader in AI systems that can be safely used and applied. We have excellent research centres, secure digital systems and a robust position in robotics, as well as competitive manufacturing and services sectors, spanning from automotive to energy, from healthcare to agriculture.”

The statement emphasised trust as vital to AI, saying: “As AI systems can be complex and bear significant risks in certain contexts, building trust is essential. Clear rules need to address high-risk AI systems without putting too much burden on less risky ones. Strict EU rules for consumer protection, to address unfair commercial practices and to protect personal data and privacy, continue to apply.”

As regards high-risk AI scenarios, the EC continued: “For high-risk cases, such as in health, policing or transport, AI systems should be transparent, traceable and guarantee human oversight. Authorities should be able to test and certify the data used by algorithms as they check cosmetics, cars or toys.”

To this assertion that AI should be explainable, the EC added caveats about the need to guard against bias in the datasets used to create AI systems, and urged caution in the use of facial recognition technology. “Unbiased data is needed to train high-risk systems to perform properly, and to ensure respect of fundamental rights, in particular non-discrimination,” it said.
“While today, the use of facial recognition for remote biometric identification is generally prohibited and can only be used in exceptional, duly justified and proportionate cases, subject to safeguards and based on EU or national law, the commission wants to launch a broad debate about which circumstances, if any, might justify such exceptions.”

The EC allied its AI strategy announcement with a declaration about Europe’s “data economy”. It again asserted: “Europe has everything it takes to become a leader in this new data economy: the strongest industrial base of the world, with SMEs being a vital part of the industrial fabric; the technologies; the skills; and now also a clear vision.”

The commission said it is aiming at “setting up a true European data space, a single market for data, to unlock unused data, allowing it to flow freely within the EU and across sectors for the benefit of businesses, researchers and public administrations”.

It added: “Citizens, businesses and organisations should be empowered to make better decisions based on insights gleaned from non-personal data. That data should be available to all, whether public or private, startup or giant.”

To do this, the EC said it will first set up a “regulatory framework regarding data governance, access and reuse between businesses, between businesses and government, and within administrations”, adding that it “means to make public sector data more widely available by opening up high-value datasets across the EU and allowing their reuse to innovate on top”. It also said it will launch “sectoral-specific actions, to build European data spaces in, for instance, industrial manufacturing, the green deal, mobility or health”.
The white paper on artificial intelligence is open for public consultation until 19 May 2020.

Bob De Caux, vice-president of AI and RPA (robotic process automation) at Sweden-based IFS, said of the EU statement: “By choosing to focus their new AI rules on ethics and transparency, the EU is positioning its AI vision in a way that can help a broad range of established businesses rather than just startups, while differentiating itself from the approaches of the US and China.

“Many European SMEs in industries such as manufacturing have had difficulty in scaling AI, not due to a lack of cutting-edge innovation, but due to the change in mindset and processes required to fully embrace a new approach. While it is true that a trust-based approach is more of a narrative than a business model, it is crucial to these businesses overcoming some of the misapprehensions and hype around AI that have built up over the past few years, and demonstrating that although it is not a magic bullet, it can drive real value when implemented properly.

“In addition, concentrating on transparency does not have to be an innovation killer. Black box approaches such as deep learning are not always appropriate or even necessary for many of the problems facing businesses, so it is encouraging to see a focus on research combining machine learning algorithms with more classic symbolic, human-understandable approaches, which will be very important in industries requiring human oversight, such as healthcare.”
The Internet of Things (IoT) evolved from a lofty concept to a prevalent reality in a relatively short amount of time. IoT technology brings intelligence to objects and devices that are connected to the internet to alter virtually every aspect of our personal and business activities. From connected cars and smart thermostats to microwaves capable of measuring the weight of food and cooking it accordingly, the Internet of Things saves us time and money while making our lives more convenient and enjoyable. Smart cities, such as Amsterdam, Dubai and Singapore, have taken these concepts further by utilizing IoT to radically improve the management of traffic, energy usage, water consumption and more, beautifully illustrating the incredible possibilities that this technology brings to our world.

IoT’s Impact on the Music Industry

According to Gartner predictions, there will be 20 billion connected things by the year 2020. IoT’s reach is far and wide, resulting in a dramatic impact on business operations and creative endeavors in the music industry. When considering IoT’s applications in music, most people will think of smart speakers, wearables and smart home devices used for streaming songs. However, while these advancements are undoubtedly significant, the music industry is embracing IoT technologies to a much greater degree, with innovative applications being discussed and developed at a rapid pace. The concept of the Internet of Musical Things (IoMusT) is upon us, showing the wonderful impact this technology can have on our industry.

Practical Applications of the Internet of Musical Things

The IoMusT carries the potential to contribute to the music industry in a multitude of ways, including smart instruments, performance and recording enhancement, assistance in composing, advanced streaming recommendations and much more. Prizm is a connected device with one aim: picking the perfect music to play for you in every scenario.
The technology takes a person’s interactions with the device and uses this data to learn what music they prefer in different contexts. Prizm is even able to recognize the people in the room and sense their mood to allow the device to alter the music selection to suit each particular situation.

For musicians or other music enthusiasts who are drawn to new technologies, Arterfacts offers a smart musical instrument connected to IoT. The Liverpool-based startup’s innovation enables people to play various instruments, such as the keyboard, percussion, wind and string, either through an attached touchscreen and mouthpiece or through movement. This amazing device can be configured differently to match the way each physical instrument would be played, thereby mimicking the real thing without the need to make additional purchases.

Thanks to Music: Not Impossible (M:NI), deaf music fans can feel vibrations through their skin that provide what the company refers to as a true “surround body” experience. The collaboration between Avnet and Not Impossible Labs developed a wireless wearable system that creates zero-latency vibrations, thereby allowing the user to attend a live concert with the feelings being perfectly in sync with the music being played.

Although remote recording isn’t brand new, IoT solutions offer incredible advancements to this concept. Ohm Studio is a music production application dedicated to providing musicians with the ability to make music over the internet, allowing, for example, a band to record a song together while they are physically in different studios in various locations around the world.

Concerns and Threats

Unfortunately, as is the case with everything on the internet, IoT carries the risk of privacy and security vulnerabilities.
As we all know from the news, hackers can cause terrible havoc when data is accessed by the wrong people, leading to the loss of privacy, identity theft, intellectual property infringement and many other damaging results. The Internet of Things, and therefore the Internet of Musical Things, generates enormous volumes of data, and this mountain of information is growing exponentially. The importance of implementing comprehensive security measures cannot be overstated, and fortunately, experts in the field are working diligently to reduce the potential for data privacy breaches to stay ahead of those who attempt to cause harm. The Internet of Musical Things will continue to amaze and dazzle us with its potential, as a growing number of startups and other industry players jump on board to take this technology to its absolute limits. And I’m thrilled to be on the ride to see what’s next! What are your thoughts about the Internet of Musical Things? Please share your opinions in the comments below.
One of the most important components in a paging system is the speakers. So before we define any paging system, we start by determining the amount of sound required at each location. Once we know the type of sound required, we can select the right speaker. After we have defined the speakers, we can determine the right amplifier to use.

What’s the best speaker for you? Well, it depends on a number of things. For example, do you want to hear music or just voice paging? The type of speaker also depends on how large an area you want to cover with sound. The background noise and where you want to place the speaker are also important. It may be necessary to select a different speaker for each location. This article reviews how to select the right speaker for the job. We will review the sound output, which is measured in decibels (dB), the sound quality (or frequency range), and the power required at the speaker.

Types of Speakers

There are many types of speakers available, such as ceiling speakers, wall speakers, and horns. They are also available with direct (8-ohm) or transformer type (25 V or 70 V) inputs. You can only connect up to four 8-ohm speakers to one of the 8-ohm outputs of an amplifier. A 70-volt speaker system has a transformer inside each speaker, as well as a single transformer at the output of the amplifier. You can daisy-chain many 70-volt speakers to a single 70-volt amplifier output. For example, you can attach about eight 70-volt speakers to a single 70-volt amplifier with 40-watt power output. The IP7-SS40 is an example of a 40-watt, network-attached IP amplifier. The more speakers you attach, the lower the power available to each speaker, and thus the lower the maximum sound output level from each speaker.

Power and Sound Output

Speakers are rated at a specific sound level (measured in dB), at a specific power level (1 Watt), and at a specific distance away from the speaker (1M). dB, or decibel, is the measure of sound level output.
A Watt is the electric power that comes from the amplifier. For example, 60 dB is the sound level of normal conversation, and 120 dB is the sound of an amplified rock concert – and, by the way, experts recommend using hearing protection when the sound level is continuously over 85 dB. For more about sound levels, take a look at our other article, “How Loud is Loud”.

To increase the volume, we increase the audio power to the speaker. The rule of thumb is that every time we double the power, we get an additional 3 dB of sound output. So if we select a speaker that has a specification of 94 dB @ 1W/1M and we double the power to 2 watts, we increase the sound level to 97 dB; if we double the power again (4 watts), we get another 3 dB, or 100 dB of sound.

Let’s take a look at an actual speaker. Suppose we select the PH20T horn from Penton and mount it on the side of the building. The specification says we get 105 dB @ 1W/1M. It also says that this speaker is rated at 40 watts maximum (peak), which in practice means we can drive the speaker with up to about 20 W of continuous power without damaging it. We can use an IP amplifier like the IP7-SS40 to provide the audio power. We attach the amplifier directly to the network and the speaker to the 8-ohm output of this small amplifier. A separate 24 VAC power adapter is also needed. At 20 W, we will get about 118 dB of sound. This is the sound level at 1 M from the speaker, and it’s pretty loud, so don’t stand too close to it.

Sound Level and Distance from Speaker

The further away you are from the speaker, the less sound you hear. The sound level goes down 6 dB every time we double the distance from the speaker. For example, a speaker that provides 118 dB of sound at 1 M will have a sound level of 112 dB at 2 M from the speaker. At 4 m (13 ft.), the sound is 106 dB; at 8 m (26 ft.), it’s 100 dB; at 16 m (52.5 ft.), it’s 94 dB; and at 32 m (105 ft.), it’s 88 dB. Just for your reference, a sound level in the low 90s dB is about that of a power mower.
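The two rules of thumb above (+3 dB per doubling of power, −6 dB per doubling of distance) are both logarithmic, so they combine into one simple estimate. A minimal sketch in Python, using the PH20T figures from the example (free-field, point-source approximation only – real rooms and horn directivity will change the result):

```python
import math

def spl(sensitivity_db, power_w, distance_m):
    """Estimate sound pressure level for a speaker rated
    sensitivity_db @ 1 W / 1 m, driven at power_w watts,
    heard distance_m metres away."""
    power_gain = 10 * math.log10(power_w)        # +3 dB per doubling of power
    distance_loss = 20 * math.log10(distance_m)  # -6 dB per doubling of distance
    return sensitivity_db + power_gain - distance_loss

# PH20T example: 105 dB @ 1W/1M driven at 20 W
print(round(spl(105, 20, 1)))   # about 118 dB at 1 m
print(round(spl(105, 20, 32)))  # about 88 dB at 32 m
```

The same function lets you work backwards: pick the sound level you need at the farthest listener, then solve for the amplifier power.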
So you can see that this speaker and amplifier would provide enough sound to allow children playing in a large yard to hear an announcement.

Speakers are also rated by their angle of coverage, measured in degrees. If you are directly in front of the speaker, it will sound much louder than if you are off to the side. For example, the Penton PBC6T mounts on the wall and is best heard within an angle of 115 degrees. If the speaker is mounted on the ceiling, you should select one that has as wide a coverage area as possible.

Sound Quality or Frequency Response

Just like your home stereo system, the wider the frequency range of the speaker, the better the sound quality will be. The specifications will tell you the range of frequency response for the speaker. If you are playing music, you want a frequency range that’s as wide as possible. Better speakers will cost more, so you have to first determine what you want to hear. For example, the Penton LIS8T72 ceiling speaker has a very good frequency response of 60 to 20,000 Hz, so it’s great for both music and voice paging. Other speakers have frequency ranges of 300 Hz to 15 kHz, which is just fine for listening to a person talking.

Where They Are Placed

There are a variety of speakers available: some mount in drop ceilings, others go on the walls. There are those you can use outdoors and others that are only used indoors. As we discussed, the sound level drops the further away you are and the further off center you are from the speaker. If you place a speaker in the ceiling, it will sound very good when you are close to it. The higher the ceiling, the more dispersion you will get. So if you have a room with 10 ft. ceilings, you may be able to hear the speaker when you are about 5 ft. off center, but if you have a ceiling that’s 20 ft. tall, you can be 10 ft. away from the center of the speaker. If you have a drop ceiling and the room is less than 30 ft. by 30 ft.
in size, you can usually use one of the Penton LIS8T72 ceiling speakers or the Quam System 5/70 speaker system. These are easy to install because they just replace a ceiling tile. If you want to save money, use the small round speakers that require a cut-out in the ceiling tile, such as the Quam C5 or Penton RCS ceiling speakers. If you have a long hallway, there are dual speakers that send the sound in two different directions; you can select the Cell10BDT or the Quam System 11 bi-directional speakers. Outdoors, you can use a weatherproof horn-type speaker such as the Penton PH20T or the Quam QH16T paging horn.

The powered speaker is the easiest type of speaker to install, but it costs more because it includes a built-in amplifier. The SPKR-11-BD-P, from Digital Acoustics, is an example of a bidirectional speaker that attaches directly to the network and includes a built-in 8-watt amplifier. It’s very easy to install, since all you need is a network connection that includes PoE. The IP powered speaker includes software (for your PC) that allows you to send pages or music right from your computer. For more about network-attached paging systems, take a look at Paging Over IP Systems.

The best speaker for your IP paging system depends on where and how you want to use it. You can select the right speaker by looking at the speaker specifications, which describe the power required to achieve a specific sound level, the frequency response, and the angle of sound dispersal. You may also select a speaker for ease of installation or even how it looks. If you need help selecting the right speaker, just contact us at 1-800-431-1658 or 914-944-3425, or use our contact form.
The increasing proliferation of wireless networks in businesses, public places, and private homes, along with the widespread use of smartphones, tablets, computers, and IoT devices, has resulted in a vastly increased attack surface for malicious actors. Security in both business and non-business environments is essential for the protection of valuable data and personal information. While businesses and organizations invest significantly in wireless network security, the security of home wireless networks is often not considered. Both business and home networks face the same risks related to wireless networks. Some of these risks include piggybacking, wardriving, evil twin attacks, wireless sniffing, unauthorized computer access, shoulder surfing, and theft of mobile devices (Securing wireless networks, n.d.). Some general best-practice security concepts for wireless networks include strong password policies, encryption, the use of appropriately configured firewalls, restriction of access using MAC address filtering (Cisco), and ensuring that software is up to date.

All wireless networks should be secured by effective encryption standards. Older wireless encryption standards such as WEP and WPA should not be used because they are easily hacked using widely available key-cracking tools. Both home and business wireless networks should use WPA2 or WPA3 encryption to secure their data. WPA2 uses strong Advanced Encryption Standard (AES) encryption and effectively protects data transmitted over wireless networks. However, WPA2 can be vulnerable to password attacks such as Dictionary Attacks and Password List attacks. Dictionary Attacks use automated software to quickly try thousands of common passwords to access the wireless network. Password List attacks are similar to Dictionary Attacks, but they use lists of common passwords available on the Dark Web. WPA3 is the latest standard for wireless encryption (Wireless security protocols, n.d.).
WPA3 also uses AES encryption and has protections that prevent Dictionary and Password List attacks.

Wireless piggybacking is a wireless attack that can be mitigated using encryption. Piggybacking is when unauthorized users connect to the wireless network. This real-world threat can occur when the network is not adequately secured using a robust encryption standard such as WPA2/WPA3. Piggybacking often occurs when a person uses a neighbor’s Wi-Fi without permission or parks outside a business location to connect to the business’s wireless network without permission. Encryption must be paired with a strong password to ensure effectiveness.

The use of strong passwords can be an inconvenience to users. Therefore, users often create passwords composed of simple words that are easy to remember. These easy-to-remember passwords are also easy to crack using tools such as Aircrack-ng and BoopSuite. Therefore, strong wireless passwords should be used for both business and home networks.

A firewall is a network security device that monitors incoming and outgoing network traffic and decides whether to allow or block specific traffic based on a defined set of security rules (What is a firewall, n.d.). There are two categories of firewalls: software firewalls and hardware firewalls. A software firewall is a program installed on a computer that inspects and filters data that may be malicious. Hardware firewalls are separate devices that inspect and filter data before it reaches the network. Firewalls can be either stateful or stateless. Stateful firewalls scrutinize multiple aspects of network traffic, including the context of the traffic. These firewalls analyze the communication channels and characteristics of the data to determine what traffic is permitted. Stateless firewalls, on the other hand, inspect packets in isolation, without considering the context. Stateless firewalls are generally less expensive and faster than stateful firewalls.
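The stateful/stateless distinction above can be made concrete with a toy model. This is a simplified illustration, not a real packet filter; the packet field names and port list are invented for the example:

```python
# Toy model of stateless vs. stateful filtering (illustrative only).
ALLOWED_PORTS = {80, 443}

def stateless_allow(packet):
    """A stateless rule inspects each packet in isolation:
    only this packet's header fields matter."""
    return packet["dst_port"] in ALLOWED_PORTS

class StatefulFirewall:
    """A stateful filter remembers which flows an inside host
    initiated, and admits inbound traffic only as a reply."""
    def __init__(self):
        self.connections = set()

    def record_outbound(self, packet):
        # Note the remote endpoint we deliberately contacted.
        self.connections.add((packet["dst_ip"], packet["dst_port"]))

    def inbound_allow(self, packet):
        # Allow inbound only if it matches a connection we opened.
        return (packet["src_ip"], packet["src_port"]) in self.connections

fw = StatefulFirewall()
fw.record_outbound({"dst_ip": "93.184.216.34", "dst_port": 443})
print(fw.inbound_allow({"src_ip": "93.184.216.34", "src_port": 443}))  # True
print(fw.inbound_allow({"src_ip": "203.0.113.9", "src_port": 443}))    # False
```

The stateless rule would pass any packet aimed at port 80 or 443, solicited or not; the stateful one rejects the unsolicited probe, which is why stateful inspection costs more memory and CPU.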
Firewalls on wireless networks can help prevent attacks such as malware and viruses by stopping malicious traffic before it enters the network or device. Firewalls should also be deployed on mobile devices such as phones. Attacks in which other devices attempt to connect to a phone or mobile device can be thwarted with a properly configured mobile firewall.

Restrict Wireless Access Using MAC Address Filtering

Access to wireless networks can be restricted through the use of MAC address filtering. Since every device has a MAC address, the network can be configured to allow connections only from specifically authorized devices. MAC address filtering enables organizations to allow connections from devices that meet required security standards and have been pre-screened for malware or virus threats. Organizations may even choose to allow company-owned devices while preventing personally owned devices from connecting to the network. Restrictions such as these can be a powerful method to reduce the attack surface of a wireless network.

Wireless Network Design

The wireless network should be designed to limit the ability to access the network from outside an organization’s workspace. Wireless networks must meet the users’ needs but can also be configured to restrict the ability of intruders to gain access to the wireless signal. This can be accomplished by positioning the wireless access points in the center of the building, or at strategic locations within the workspace, and adjusting the signal strength so that the wireless signal does not reach outside the building.

The Service Set Identifier (SSID) is the broadcast name of the wireless network. It is common for manufacturers to use the same SSID for all wireless routers that they produce. Therefore, it is essential to change the default SSID so that the router manufacturer is not disclosed. SSID broadcasting can be disabled so that the network is not discoverable.
This can be helpful because it will prevent the casual user from attempting to connect to the network. However, disabling the SSID is not a real security measure, because it does nothing more than hide the network name. The network is still easily discovered using Kismet or other programs that look for available networks without SSID broadcasts.

References

Securing wireless networks. (n.d.). Cybersecurity & Infrastructure Security Agency. Retrieved October 25, 2021, from https://us-cert.cisa.gov/ncas/tips/ST05-003

Wireless security protocols. (n.d.). Cisco. Retrieved October 25, 2021, from https://ipcisco.com/lesson/wireless-security-protocols/

What is a firewall? (n.d.). Cisco. Retrieved October 25, 2021, from https://www.cisco.com/c/en/us/products/security/firewalls/what-is-a-firewall.html
Social engineering is the art of manipulating people into performing actions or exposing confidential information in order to gather information for fraudulent purposes or gain unlawful access to computer systems. This case deals with the data and assets of a deceased employee – a struggle, quite literally, ‘over my dead body.’

HALOCK investigated a case where an employee had died out of state, and an IT staff member was instructed to retrieve that employee’s laptop from the neighboring state and bring it directly back to corporate headquarters. While heading back from the decedent’s home after picking up the laptop, the IT staff member was met by a barrage of calls from former co-workers of the departed, who were also close friends. They expressed their grief at the person’s passing and noted that they wished to retrieve some personal information from the laptop – old pictures and the like – as they knew that once the laptop made it back to the corporate office, all of their precious memories with their former coworker would be lost forever.

The IT staff member knew these former coworkers, since they had all worked at the same company together, but they had since left to work for a competitor. Since the IT staffer knew and trusted these individuals, the decision was made to let them retrieve the contents of the laptop before returning to headquarters. The IT staffer proceeded to allow the former coworkers to use USB drives to capture data from their deceased friend’s laptop. While waiting, the staffer felt uneasy about the situation and debated whether or not to tell anyone. Once the “friends” of the deceased were finished, the IT staffer continued back to the office, as originally instructed, and delivered the laptop to executive management. Concerned that the former coworkers had taken more than personal artifacts from the laptop, the staffer reported the encounter to management.
HALOCK was called in to conduct a computer forensics examination to see exactly what was taken off the deceased employee’s laptop. Instead of vacation photos or music files, the forensics team found that the former employees had used the USB drives to copy all of the corporate intellectual property on the machine. This was a clear case of using social engineering to perform corporate espionage. It was a unique case because the social engineers (the former co-workers) exploited an employee (the IT staffer) who was grieving the death of a colleague. This was the first time HALOCK had seen grief used by social engineers.

So how did it all end? Unfortunately, but necessarily, the IT staffer was fired for not protecting company assets. Legal action was brought against the former coworkers who stole the intellectual property. The company that fell victim to this scam instituted better education and training for its IT staff to prevent a future security incident, and also invested in annual testing of its staff’s cyber security awareness.
FILE.SEEK Method

Byte Offset: the position in the file at which to start reading or writing data.

The .SEEK() method sets the position of the file pointer to a byte offset, measured in bytes from the beginning of the file specified by the file object pointer. This method is used to read and write data at a particular location in a data file.

The following script changes four characters in the Defaults file, replacing the old text "VAR3" with the new text "XXXX":

file_pointer = FILE.open("c:\a5\defaults.txt", FILE_RW_SHARED)
' Read the third group of 4 characters.
file_pointer.seek(8)
text = file_pointer.read(4)
trace.writeln("Old value: " + text)
' Write the four characters at the same position.
file_pointer.seek(8)
file_pointer.write("XXXX")
' Read the third group of 4 characters to see the change.
file_pointer.seek(8)
text = file_pointer.read(4)
trace.writeln("New value: " + text)
file_pointer.close()
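The same seek/read/overwrite-in-place pattern exists in most languages. A Python analogue of the script above (illustrative only – not Xbasic; it uses a temporary file rather than c:\a5\defaults.txt):

```python
import os
import tempfile

# Create a scratch file containing four 4-character groups.
path = os.path.join(tempfile.mkdtemp(), "defaults.txt")
with open(path, "wb") as f:
    f.write(b"VAR1VAR2VAR3VAR4")

with open(path, "r+b") as f:   # open for reading and writing in place
    f.seek(8)                  # byte offset 8 = start of the third group
    old = f.read(4)            # reads b"VAR3"
    f.seek(8)                  # reposition: read() advanced the pointer
    f.write(b"XXXX")           # overwrite four bytes at the same position
    f.seek(8)
    new = f.read(4)            # reads b"XXXX"

print(old, new)
```

As in the Xbasic version, the pointer must be repositioned before the write, because each read or write advances it past the bytes it touched.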
What is a Star Schema?

How does a Star Schema work?

Numeric value fields and dimension attribute values are both stored in the fact table. As an example, consider the following:

Numeric value fields are unique to each row or data point and have no correlation or relationship to data in other rows. These might include transactional details like the order ID, total amount, net profit, order quantity, or precise time.

Dimension attribute columns store the foreign key value of a row in a linked dimension table, rather than the data itself. This sort of information will be referenced in several rows of the fact table. It could hold the sales staff ID, a date value, a product ID, or a branch office ID, for example.

Dimension tables store the supporting information for the fact table. Every Star Schema database has at least one dimension table. Each dimension table is linked to a dimension value column in the fact table and holds extra information about that value.

An example of a Star Schema

The employee dimension table may contain information such as the employee’s name, gender, address, or phone number, and may use the employee ID as a key value. A product dimension table can hold data such as the product name, manufacturing cost, color, and first-to-market date.

Characteristics of Star Schema

Because of the following characteristics, the Star Schema is ideally suited for data warehouse database design:
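The fact/dimension layout described above can be sketched concretely. Here is a minimal example using Python's built-in sqlite3 module; the table names, columns, and sample rows are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Dimension tables hold descriptive attributes, keyed by an ID.
cur.execute("CREATE TABLE dim_employee (employee_id INTEGER PRIMARY KEY, name TEXT, phone TEXT)")
cur.execute("CREATE TABLE dim_product  (product_id  INTEGER PRIMARY KEY, name TEXT, color TEXT)")

# The fact table stores numeric measures plus one foreign key per dimension.
cur.execute("""
    CREATE TABLE fact_sales (
        order_id    INTEGER PRIMARY KEY,   -- numeric value unique to each row
        employee_id INTEGER REFERENCES dim_employee(employee_id),
        product_id  INTEGER REFERENCES dim_product(product_id),
        quantity    INTEGER,
        total       REAL
    )""")

cur.execute("INSERT INTO dim_employee VALUES (1, 'Ada Lovelace', '555-0100')")
cur.execute("INSERT INTO dim_product  VALUES (7, 'Widget', 'red')")
cur.execute("INSERT INTO fact_sales   VALUES (100, 1, 7, 3, 29.97)")

# A typical star-schema query joins the fact table out to its dimensions.
row = cur.execute("""
    SELECT e.name, p.name, f.quantity
    FROM fact_sales f
    JOIN dim_employee e ON e.employee_id = f.employee_id
    JOIN dim_product  p ON p.product_id  = f.product_id
""").fetchone()
print(row)  # ('Ada Lovelace', 'Widget', 3)
```

Note the "star" shape: every join runs from the central fact table directly to one dimension table, with no joins between dimensions.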
This is a complete list of all Interskill self-paced elearning courses, grouped by curriculum. Our learning designs are informed by contemporary learning theory and are interactive, contextualized, and responsive to diverse learning styles. We believe in immersing learners in relevant, authentic activities designed to motivate, engage, and produce quantifiable change.

The Assembler Introduction course discusses the basics of mainframe assembler programming, covering number systems, architecture, instructions, syntax, and addressability.

The Assembler Instructions course describes how to code instructions that perform arithmetic calculations, data comparisons, and branching. Details of various Assembler linkage conventions and how they are used are also discussed.

The Assembler Macros course describes the syntax and coding required to create an Assembler macro. The course also looks at the function of several system macros that are available for use.

The Assembler Programming course addresses advanced programming techniques, discussing topics dealing with re-entrant programs and programs that use access registers to address data spaces. This course also looks at the interpretation of program listings.

Assembler z/OS Advanced

The z/OS Assembler course covers introductory concepts, instructions, z/OS architecture, and more. It instructs the student on programming using assembler language mnemonics, provides a machine-code-specific introduction to the z/OS architecture, and covers number systems and program compilation and execution.

Assembler Cross Memory Services

The z/OS - Cross Memory Services course describes the components involved in cross memory communications that allow a service provider's address space to connect with a user's address space. You will also look at the linkage stack and service provider coding, with an emphasis on Program Call (PC) routines and the cross memory user.
Finally, you will see how data is traditionally copied in a cross memory environment, and also look at how this is performed using Data-in-virtual.

The Blockchain Technologies course provides you with a solid understanding of the business issues surrounding the emergence of blockchain, explaining its value and general structure. It then describes applications that are currently using this technology and provides an insight into its potential. For those just starting out, it suggests existing frameworks and platforms where blockchain can run and, because blockchain is still in its infancy, where more information on this topic can be obtained. The last module covers the more technical aspects of blockchain, discussing the use of hashes, block content, and how blockchain data is created and distributed throughout the network.

Introduction to CONTROL-M for z/OS

The Introduction to Control-M for z/OS course begins by describing the need for workload scheduling, introducing BMC Control-M for z/OS and describing its general function. It then looks at how this product is accessed using a traditional 3270 interface, as well as the Control-M EM GUI.

Defining and Scheduling Jobs Using Control-M for z/OS

The Defining and Scheduling Jobs Using Control-M for z/OS course works through the process of creating the definitions used to schedule jobs. It begins by introducing the concept of calendars and describing how various types of calendars can be created. It then moves to the actual job definition, describing how it can be used to meet diverse scheduling requirements. Automated actions performed following job completion are also discussed. The final module delves into the use of system and user-defined variables and describes how they can be used to dynamically modify JCL submitted from a Control-M job definition.
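The blockchain mechanics mentioned in the Blockchain Technologies outline above (hashes linking block content into a chain) can be illustrated with a toy sketch. This is a generic, hypothetical hash chain for teaching purposes, not code from any particular blockchain platform or course.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over a block's contents."""
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    # Each new block records the hash of its predecessor,
    # which is what makes earlier blocks tamper-evident.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

chain = []
add_block(chain, "genesis")
add_block(chain, "payment A->B")

# The second block's stored hash matches the first block as written...
intact = chain[1]["prev_hash"] == block_hash(chain[0])
# ...but no longer matches once the first block is altered.
chain[0]["data"] = "tampered"
broken = chain[1]["prev_hash"] != block_hash(chain[0])
print(intact, broken)
```

Changing any earlier block changes its hash, breaking the link stored in its successor; distributing copies of the chain across a network is what makes such tampering detectable in practice.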
Monitoring and Managing Job Processing

The Monitoring and Managing Job Processing course describes how jobs are monitored using the Active Environment screen, looking at job statuses and the commands used to manipulate jobs. It then turns to the monitoring and management of other Control-M components, including the log, conditions, and control and quantitative resources. The final module discusses the utilities that run automatically and those that can be invoked manually, as well as the types of reports that can be run.

MainView – Overview of the MainView Environment

This course introduces you to the architecture common to MainView products, discussing the function of several address spaces, and the Next Generation Logger. Following this, the MainView 3270 interface is examined, looking at its structure and the commands that can be used to manipulate the screen content. Details describing how historical data is accessed are also provided.

MainView – Advanced MainView Screen Functionality

This course examines advanced MainView screen functionality such as hyperlinks and EZ Menus, which are used to quickly navigate between related MainView screens. Filtering and customization of screen data is covered in detail, highlighting options that can be used to more clearly identify specific data.

C/C++ on z/OS for C Programmers

This course provides the user with a broad overview of C/C++ programming in the z/OS environment, providing examples of the types of z/OS subsystems that C/C++ programs can interact with. Use of the IBM XL C/C++ compiler, and the role of the Binder, are discussed when explaining how executable modules are created.
The final section of this course discusses a number of z/OS features that C/C++ programmers need to be aware of when coding for the z/OS environment.

CA 1® Tape Management – Using Tape Media

The CA 1® Tape Management - Using Tape Media course describes the use of the CA 1® Tape Management system, the online facility and its uses, and the standard daily maintenance processing required by CA 1.

CA 1® Tape Management – Identifying and Resolving Media Problems

The CA 1® Tape Management - Identifying and Resolving Media Problems course describes the structure of the CA 1 Tape Management system; its chaining process, AIVS, and tape stacking facilities; problem determination; and the utilities available for correcting structural problems and reporting.

CA Workload Automation Restart Option for z/OS Schedulers Overview

The Workload Automation Restart Option for z/OS Schedulers Overview course introduces the components, functions, and capabilities of CA 11, and describes how job information is stored and processed by this product.

Managing CA Workload Automation Restart Option for z/OS Schedulers

This course looks at the interaction that users have with CA 11 in order to determine the status and attributes of job information stored in the CA 11 database. It also identifies how CA 11 data can be used for generating reports. Common online commands and batch generation programs are discussed in detail, while the last module focuses on possible CA 11 problems and their resolution, and includes an overview of backup and recovery strategies associated with CA 11.

Introduction to CA Workload Automation – CA 7® Edition

This course introduces the learner to the CA 7 environment and its structure. It describes the methods used to schedule a job and explains how jobs progress through the CA 7 system.
A description of the initialization parameters used and how they can be invoked at CA 7 startup is provided, along with CA 7 general access and navigation instructions.

CA 7 Workload Automation – Scheduling Batch Processing

This course describes the job attributes required when defining a job in CA 7 and explains how CA 7 keeps track of data sets used by jobs under its control. The Date/Time and Event driven scheduling methods are discussed thoroughly, along with manual methods used to run ad-hoc jobs under CA 7.

CA 7 Workload Automation – Monitoring and Managing the Batch Processing Environment

This course describes techniques for monitoring CA 7 job throughput and the functions that can be applied to jobs residing on the Request and Ready queues. Commands used to forecast workload demands are explained, and advanced manipulation of CA 7-managed JCL using CA Driver procedures and global variables is covered. The use of virtual resources to control job submission is also discussed, along with CA 7 job documentation processes.

CA 7 Workload Automation – System Programmer Interaction with CA 7

This advanced course describes how communication with CA 7 is possible using various interfaces, and then focuses on the management and performance aspects of CA 7 using workload balancing macros and reporting.

CA 7 Workload Automation – Backup, Recovery and Problem Resolution

This advanced course covers the types of backup and recovery options, techniques, and products that are available to CA 7. It looks at defining the attributes associated with CA 7 disaster recovery mode and how some automatic recovery can be configured. Guidance on preventing and resolving common CA 7 issues is also provided.

CA Endevor™ Software Change Manager Introduction and Basic Usage

The CA Endevor™ Software Change Manager Introduction and Basic Usage course provides an overview of change management practices and describes the role that CA Endevor SCM plays.
A general description of CA Endevor SCM configuration defaults is provided, along with details of the foreground and batch processing tasks that can be performed.

CA Endevor™ Software Change Manager Package Processing and Facilities

The CA Endevor™ Software Change Manager Package Processing and Facilities course describes the purpose of packaging and how it is performed in foreground and batch modes. A comparison of the functions available using CA Endevor SCM Quick Edit and the CA Endevor SCM ISPF Interface is discussed. The Query facility is also described, along with how to use the Automated Configuration Management data.

The CA OPS/MVS® Event Management and Automation – Overview, Components and Features

The CA OPS/MVS® Event Management and Automation - Overview, Components, and Features course explains the need for system automation in today's enterprise environment and describes the role of CA OPS/MVS. An introduction to OPSVIEW panels and OPSLOG is also provided.

The CA OPS/MVS® Event Management and Automation – Rules and OPS/REXX

The CA OPS/MVS® Event Management and Automation - Rules and OPS/REXX course describes how automation rules are defined and used in event automation, and how the EasyRule facility can be used for this purpose. An introduction to the CA OPS/MVS REXX interface, and the associated commands, variables, and functions that can be referenced, is also provided.

The CA OPS/MVS® Event Management and Automation – Automating Events Using the Relational Data Framework

The CA OPS/MVS® Event Management and Automation - Automating Events Using the Relational Data Framework course discusses the use of relational database tables and SQL to build event automation.
The CA OPS/MVS® Event Management and Automation – Automating Events Using the System State Manager (SSM)

The CA OPS/MVS® Event Management and Automation - Automating Events Using the SSM course describes how the System State Manager (SSM) is used to create relational tables used for event automation purposes. The testing of event automation using SSM is also discussed.

The CA OPS/MVS® Event Management and Automation – Schedule and Group Managers for Event Management

The CA OPS/MVS® Event Management and Automation - Schedule and Group Managers for Event Management course introduces the Schedule Manager and explains how it is used to manage automated events. The function of the Group Manager and its role in managing automated events is also discussed.

CICS TS – CICS Transaction Server Introduction 5.6

The CICS TS - CICS Transaction Server Introduction course provides an overview of the CICS Transaction Server product and how it is used to process work. A description of the components that comprise CICS TS, and how they are integrated, is also provided.

CICS TS – CICS Explorer Fundamentals 5.6

This course describes how to access a CICS TS system using CICS Explorer, and how the CICS Explorer window can be modified to display various CICS TS data. Details describing how CICS Explorer help can be accessed, and the creation and integration of customized help, are also covered.

CICS TS – Controlling CICS Transaction Server Operations 5.6

The CICS TS - Controlling CICS Transaction Server Operations course focuses on CICS startup and shutdown processes and commands, and the handling of system-related CICS problems.
CICS TS – CICS Command Simulation 5.6

A number of simulations are provided that the student can use to assess their skills and knowledge in entering commands, and interpreting the output produced, when monitoring and manipulating CICS resources and starting/stopping CICS.

CICS TS – Programming Basics 5.6

The CICS TS - CICS Programming Basics course provides an overview of the CICS Transaction Server product and describes how it is used to process work. It looks at the application code required for programs working with CICS, using various programming languages. It also describes the major interfaces used to interact with this product.

CICS TS – Program Control and Communication 5.6

The CICS TS - Program Control and Communication course describes the methods used to transfer data from one CICS program to another, and the commands used to achieve this. It also explains the various CICS communication facilities and features that can be used for interaction between CICS programs and other programs, both within and external to CICS. This course also introduces the CICS programmer to more basic CICS issues, including serialization, threadsafe programming, containers, and CICS data areas.

CICS TS – Files and Databases 5.6

This course discusses how CICS applications can be used to access and update data, and also looks at the code and interfaces required by CICS to communicate with Db2.

CICS TS – Storage and Transient Data 5.6

This course looks at some of the features provided by CICS for application programs, including storage, temporary storage queues, and transient data queues.

CICS TS – Programming for Web Access 5.6

This course discusses the options available to programmers when there is a need to connect to CICS using web-based interfaces. It addresses the HTTP, SOAP, and JSON protocols and the code required to send and receive requests using them.
CICS TS – Using CICS Transaction Gateway 5.6

This course looks at the CICS Transaction Gateway product, describing when and how it is used to facilitate communication with CICS.

CICS Terminal Communications 5.6

The CICS TS - CICS Terminal Communications course looks at traditional CICS communication methods with terminals, and expands on this by describing how Basic Mapping Support (BMS) maps are created and used in today's environment.

CICS TS – Using SDF II to Maintain CICS TS Maps 5.6

This course provides you with an overview of the SDF II product and describes how it is used to create BMS maps.

CICS TS – Programming for Recovery 5.6

This course addresses how CICS code and general CICS system facilities can be configured to handle errors and perform recovery.

CICS TS – Debugging CICS Programs 5.6

The CICS TS - Debugging CICS Programs course looks at several different CICS-supplied transactions and system facilities that can be used to identify and diagnose problems. The CEMT command is discussed at length, with examples that show how CICS resource details are displayed and updated. Invoking CICS debugging transactions and interpreting the results is covered, as well as the use of system dumps and traces.

For organizations to successfully transition their business to the cloud, they should consider strategies that will minimize the associated risks. This course introduces concepts around what cloud adoption and cloud governance are, and what strategies and processes are needed to ensure a business uses tools that aid a successful transition to the cloud.

Introduction to Cloud Computing

This course provides an overview of cloud computing concepts, including key characteristics, cloud services, and cloud deployment models. It discusses the mainframe's role in cloud computing and then looks at future developments around cloud computing.
Understanding Cloud Architecture

This course looks at cloud architecture and the technologies and methods used to develop and deploy modern cloud applications.

This course provides an overview of the security measures that need to be considered when implementing an organization's cloud environment. It discusses the importance of security and compliance, and highlights some of the tools and security mechanisms that can help ensure an organization's data integrity and security are maintained.

Coaching and Mentoring for Technical Specialists

The Coaching and Mentoring for Technical Specialists course begins by explaining how learning has evolved from traditional on-the-job and classroom training to a myriad of learning resources suited to a wide range of people. Coaching and mentoring are introduced, describing how they differ and the benefits they can provide. These topics are then discussed in more detail, outlining various coaching and mentoring models and how they can be implemented and managed. Several scenarios involving technical coaching and mentoring are presented, allowing you to relate these types of programs to how they might run in your environment.

COBOL Programming – Basics

The COBOL Programming Basics course introduces the COBOL language and its basic structure. It describes the syntax and use of program logic statements in the procedure division of a COBOL program. It examines the standard loop and conditional statements, and the available arithmetic operations. It also describes the use of basic screen and printing instructions.

Data and Datafile Definitions in COBOL

The COBOL Data and Datafile Definitions course explains how the COBOL programming language describes and defines data. It also shows how COBOL data definitions can be used to manipulate the way data is used. It explores display and computational formats, and the use of redefines to reference data in different ways.
COBOL Programming – Manipulating Data

The COBOL File Handling course describes how COBOL can be used to define and process several of the common file types used in system processing. It details how sequential and direct files can be defined in the environment division of the program, and the instructions and processes used to access data sequentially and directly through an index.

COBOL Programming – Advanced

The COBOL Programming - Advanced course examines the use of tables in a COBOL program and the methodologies used for file sorting. It details the use of subprograms and the linkage section. It also shows how parameters are passed to a program.

COBOL – IBM Enterprise COBOL 6.3 for z/OS

The COBOL - IBM Enterprise COBOL 6.3 for z/OS course is designed for learners with a basic understanding of generic COBOL who need to extend its use to the z/OS environment. It describes how COBOL programs are made available through compile and bind processes, and discusses coding and options specific to the z/OS environment. The use of IBM's Language Environment is presented, and a number of coding techniques used to improve the performance of COBOL running on z/OS are also shown.

Accessing IMS Databases from COBOL

The Accessing IMS Databases from COBOL course details the structure and use of an IMS/DB database. It gives examples of the DL/I data access language and shows how to use DL/I in COBOL programs to read and update IMS data. The concept of backup and recovery, particularly in the context of batch programming runs, is also explained.

This course begins by discussing the evolution of encryption, describing its importance and benefits, and how pervasive encryption has only recently become a viable solution for organizations looking to meet data security requirements and compliance regulations.
Major components of the z14 infrastructure are described, explaining how at-rest and in-flight data associated with those components is supported by new and existing encryption capabilities.

Implementing Pervasive Encryption on z/OS – Expert Videos

This expert video series introduces and discusses the types of z/OS data you should consider encrypting and the levels of encryption available. It begins by looking at full disk encryption, then moves to methods used for encrypting individual disk data sets. Information on encrypting other at-rest data residing on tape and the coupling facility is presented, as well as how unique data such as JES2 spool data sets and database data can be secured. How in-flight data is encrypted is also discussed. Finally, some best practices for determining which data you should encrypt are presented.

Big Data, Hadoop, and Analytics

This course is designed to introduce and guide the user through the three phases associated with big data: obtaining it, processing it, and analyzing it. The Introduction to Big Data module explains what big data is, its attributes, and how organizations can benefit from it. It also provides a snapshot of job roles, and of the available certification and training, for those looking to forge a career in big data.

Machine Learning and Spark

This course is designed for those working with organizations looking to implement Machine Learning solutions. It is also of benefit to those looking to implement Spark on z/OS. It begins by explaining what Machine Learning is, how it works, and how organizations can benefit from it. The course then focuses on IBM's Machine Learning for z/OS solution, describing its features and components.
In the final course, a description of Apache Spark and how it is used in a Machine Learning solution on a z/OS system is presented.

Storage – Introduction to Storage and Disk Systems 2.4

This course describes how data center storage has evolved and its future in this environment. It then focuses on the hardware and software that comprise today's disk systems and how they meet the needs of the data center.

Storage – Understanding Tape Storage 2.4

This course discusses how tape usage in data centers has evolved and looks at the purpose of this medium in today's environment. An overview of tape storage capabilities is given before looking at the emergence of virtual tape and how it is either replacing traditional tape systems, or working alongside them, to meet the data storage demands of the enterprise.

Storage – Networks, Administration, and DASD Management Using ICKDSF 2.4

This course provides an overview of network storage configurations; the monitoring and management tasks associated with the Storage Administrator role are also discussed.

Storage – Managing z/OS Data Using DFSMS Constructs 2.4

This course introduces you to the family of DFSMS products used to manage z/OS data, and then focuses on the creation and implementation of data, storage, and management classes, as well as storage groups, to automate processes in the storage environment.

Storage – Storage and Tape Administration Using DFSMShsm and DFSMSrmm 2.4

Initial content discusses space administration needs, looking at data center backup and migration requirements, and then shows how these are met using DFSMShsm. The management of tape volumes and labels using DFSMSrmm is also covered in detail.

Db2 – Introduction to RDBMSs and Db2 v12

The Introduction to RDBMSs and Db2 course describes, from a Database Administrator's (DBA) viewpoint, how Db2 is used and the types of Db2-related tasks that the DBA performs.
The course also looks at Db2's system configuration requirements and how it is implemented in a z/OS environment.

Db2 – Manage Data Definitions with Db2 v12

The Manage Data Definitions with Db2 course describes how SQL is used to define a Db2 database and its associated objects. It looks at SQL statement syntax and the methods used to invoke SQL statements.

Db2 – Db2 SQL Fundamentals v12

The Db2 SQL Fundamentals course looks at some of the more common SQL statements used by programmers when starting out. It addresses the code used to obtain and sort Db2 table data, as well as the methods used for inserting, deleting, updating, and merging table data.

Db2 – Advanced Db2 SQL v12

The Advanced Db2 SQL course discusses some of the more advanced SQL code used to manipulate table data. Various methods used for joining tables are presented, along with examples of SQL statements and subqueries used to filter data results.

Db2 – Create and Maintain Db2 Programs v12

The Create and Maintain Db2 Programs course describes how SQL is invoked from an application program and the interaction that can occur between the application program and Db2. This course also discusses how a Db2 COBOL program is created.

Db2 – Db2 Stored Procedures v12

The Db2 Stored Procedures course describes how stored procedures are used and the platforms on which they can be implemented. The benefits derived from using stored procedures are discussed, as well as the security implications associated with them.

Db2 – Optimize Db2 Application Performance v12

The Optimize Db2 Application Performance course describes the methods used by Db2 when processing application programs containing SQL, and provides details of the tools and utilities that can be used to measure and analyse their effectiveness.

Db2 – Db2 Fundamentals v12

The Db2 Fundamentals course describes what Db2 is, how it is used, and the components that comprise its structure.
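The SQL fundamentals outlined in this Db2 curriculum (selecting, sorting, inserting, updating, and deleting table data) can be sketched with generic SQL. The snippet below uses SQLite purely as a runnable stand-in for Db2; the table and data are hypothetical, and the statements shown are generic SQL rather than Db2-specific syntax.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [(1, "Lee", "IT"), (2, "Kim", "HR"), (3, "Ada", "IT")])

# SELECT with filtering and sorting
rows = con.execute(
    "SELECT name FROM emp WHERE dept = 'IT' ORDER BY name").fetchall()
print(rows)

# UPDATE and DELETE
con.execute("UPDATE emp SET dept = 'OPS' WHERE id = 2")
con.execute("DELETE FROM emp WHERE id = 3")
remaining = con.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
print(remaining)
```

On z/OS, the same statements would typically be issued through SPUFI or embedded in an application program, as the courses above describe.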
An overview of the SQL language, which is used to communicate with Db2, is provided, along with details on how it is used with SPUFI.

Db2 – Managing Db2 Operations v12

The Managing Db2 Operations course looks at Db2 from an Operations viewpoint, describing Db2 startup and shutdown, common operator tasks, message interpretation, and restart and recovery considerations. A number of commands used to analyze the status of Db2 components are discussed, as well as the facilities associated with backup, recovery, and restart.

DevOps – Introduction to DevOps in the Workplace

The DevOps - Introduction to DevOps in the Workplace course begins by discussing traditional software development and deployment, and how DevOps can be used to improve this process. A holistic view of DevOps is broken down into its core components, describing the people and processes involved in each phase. The continuous DevOps integration, delivery, and deployment phases are explained, along with common release management deployment techniques. An overview of the types of monitoring and reporting required to measure the effectiveness of DevOps practices is also provided, showing how it feeds back into the DevOps cycle.

The Agile Fundamentals course discusses the business value of adopting an Agile philosophy, introducing a real-life software development project and describing how Agile is used to restructure traditional software development and deployment tasks. The values and principles associated with the Manifesto for Agile Software Development are explained, while an overview of common frameworks used to adopt Agile philosophies is provided, along with their benefits.
Additional Agile training, accreditation, and related supported products and practices are also mentioned throughout the course.

Ensuring Data Center Business Continuity

This course begins by immersing the learner in a full-scale disaster, getting them to think about all the elements involved in not only recovering data, but also ensuring that the overall business runs as expected. It then describes what business continuity is, citing well-known events, and where disaster recovery fits in. The course then explains common business continuity strategies and looks at standards, in particular ISO 22301, to see how current standards tackle this important facet of business life.

IDz – IBM Developer for z/OS Basics

This course introduces the programmer to the IBM Developer for z/OS (IDz) product, describing its purpose and features, how it is installed, and how resources on a host system are accessed.

IDz – Creating and Managing Applications Using IDz

This course describes the methods used to create, manage, and maintain applications under IDz. It provides details on the benefits of projects and subprojects within IDz and the application tasks that can be performed within this structure. IDz testing and debugging capabilities are discussed in detail, in particular the ZUnit testing framework and the IBM z/OS Debugger. The final module addresses the IDz features that allow you to create applications for Db2, CICS, and IMS.

z/OS Explorer – IBM Explorer for z/OS

The IBM Explorer for z/OS (also known as z/OS Explorer) course discusses the evolution of this product and how it fits into IBM's strategy of producing powerful, modern tools that can be used easily by both experienced and entry-level personnel.
The product's Eclipse-based framework is discussed in detail, with considerable emphasis on the use of the Remote System Explorer (RSE), the z/OS and Resource perspectives, and the related views used to display and manage z/OS data.

IBM Mainframe Communications Concepts

The IBM Mainframe Communications Concepts course provides an overview of traditional SNA and TCP/IP communication protocols and the logical and physical components associated with them.

The VTAM Commands course discusses the use of commands to display the status and attributes of VTAM resources. An explanation of the processes used to start, activate, deactivate, and stop VTAM resources is also provided.

Mainframe TCP/IP Commands

This course provides the learner with a basic understanding of IBM mainframe networks with z/OS. It introduces traditional SNA subarea networks, SNA APPN networks, and mainframe TCP/IP networks, including some of the network equipment associated with each. Current topics such as the future of SNA, and the concept of using TCP/IP to transport SNA traffic, are also covered. Finally, the learner is introduced to the basic VTAM and TCP/IP commands needed to use, control, and investigate mainframe networks.

VTAM Command Simulations

A number of simulations are provided that the student can use to assess their skills and knowledge in entering commands, and interpreting the output produced, when monitoring and managing VTAM.

Z Performance – Introduction to Mainframe Performance

The Introduction to Mainframe Performance course provides the learner with a core understanding of the performance measures required when managing a mainframe environment. Measuring the usage of critical resources is discussed, and potential issues that can affect the performance of tasks running in a z/OS system are presented.
Z Performance – z/OS I/O Performance and Capacity Planning

In this course you will examine the I/O process and see how I/O performance problems are detected, along with the metrics used to determine where a problem may exist. Methods used to improve I/O performance are also discussed.

Z Performance – z/OS Performance Tools and Software Pricing

In this course you will discover how SMF is used to capture important system activity and store it as specific record types. You will see how these records are structured and the utilities used to convert their content into a readable format. Commands used to display, configure, and manipulate SMF are covered, as well as the process of archiving SMF records and creating your own SMF records. Following this, an introduction to software licensing is presented, describing common licensing models and the metrics they use to determine the cost to the customer. This information will assist the user in determining ways to minimize software licensing costs.

Z Performance – z/OS Workload Manager

The Z Performance - z/OS Workload Manager course provides the learner with steps describing how WLM components are created and linked to form a WLM policy. The course then progresses to discussing in detail various workloads and the goals and importance levels that should be assigned to them. This is followed by an overview of the performance information that can be obtained through SMF records, MVS commands, and SDSF.

IBM MQ – Introduction to IBM MQ

This course provides the learner with basic information about IBM MQ, initially describing how it is used and then branching out to discuss its features. A detailed breakdown of IBM MQ components and their structure is given, providing you with an overview of how it could be configured in your environment. Finally, the use of IBM MQ in a z/OS environment is covered, with details on how it differs from other platforms.
IBM MQ – MQ Operations and Administration This course begins by describing IBM MQ and its common deployment options, and then expands to show how an IBM MQ queue manager is created. Various commands used to interact with MQ components are discussed throughout the remaining content, showing how definitions are created and displayed, and modifications that can be made to them. A focus on the security of MQ resources and the authentication required to access them is also presented. IBM MQ – MQ Operations and Administration for z/OS This course looks at the differences between traditional MQ and how it is implemented and run in a z/OS environment. It discusses the use of z/OS datasets and files that need to be created as well as the procedures used to enable MQ in z/OS. You will see how traditional MQ commands map to z/OS, and how MQ resources are managed using z/OS online facilities and batch utilities. Security for MQ resources on z/OS is examined along with tools and utilities used for monitoring aspects of MQ performance. IBM MQ – MQ for Application Programmers This course begins by identifying the basic programming code used by applications to interact with IBM MQ. It describes the MQ messaging process and then delves more heavily into the commands that can be used to get and put MQ messages, and manage MQ objects. Details on programming with MQ in a z/OS environment are provided, explaining core differences when using that platform. View series →
This course is designed to provide a system administrator with no prior AIX experience an introduction to the background and the fundamental components of AIX. The course covers essential knowledge including concepts, system access and management, and commonly used administrative commands. AIX Fundamentals for UNIX System Administrators This course is designed to provide existing UNIX administrators a path to understanding the critical differences with AIX.
Topics examined include the essential components of AIX, system management, performance improvements, and AIX system troubleshooting. AIX Virtualization, VIO Server and Management This course is designed to provide the learner with an understanding of the tasks involved in creating and managing a virtualized AIX environment, and assumes the learner has a basic understanding of AIX. Topics covered include the fundamentals of virtualization in AIX environments, an examination of the Virtual I/O Server, and the use of management devices such as the HMC, IVM, or SDMC. View series →
IBM i – Fundamentals The IBM i Fundamentals course provides learners with an introduction to IBM i from an operations point of view. IBM i – Introduction to IBM i for System Operators The IBM i Introduction to IBM i for System Operators course provides learners with an introduction to IBM i from an operations point of view. The course will familiarize you with the IBM Navigator for i and 5250 emulation interfaces and provide examples of some of the tasks that can be performed using them. IBM i – Monitoring and Managing IBM i Workloads The IBM i Monitoring and Managing IBM i Workloads course provides learners with an overview of the processes involved in monitoring, managing and controlling IBM i workloads; an overview of managing printing is also provided. View series →
IBM i – CLP – Control Language Programming The CLP - Control Language Programming course introduces programming that uses the IBM i Control Language (CL). It explains how to use the variables utilized in a CL program and control its processing. IBM i – CLP – Programming Functions and Messaging The CLP - Control Language Programming Functions and Messaging course describes the more advanced features of Control Language programming.
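The CLP courses above cover CL variables and control of program flow. A minimal, hypothetical CL program sketch (all names are invented for illustration) might look like:

```
PGM        PARM(&NAME)                      /* Entry point, one parameter */
DCL        VAR(&NAME) TYPE(*CHAR) LEN(10)   /* Declare the parameter      */
DCL        VAR(&MSG)  TYPE(*CHAR) LEN(30)
CHGVAR     VAR(&MSG) VALUE('Hello ' *CAT &NAME)
IF         COND(&NAME *EQ ' ') THEN(DO)     /* Conditional processing     */
   CHGVAR     VAR(&MSG) VALUE('Hello, whoever you are')
ENDDO
SNDPGMMSG  MSG(&MSG)                        /* Send the message           */
ENDPGM
```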
It focuses on how to use CL message handling to monitor the correct execution of CL programs. View series →
Query – Database Basics and the Need for Query The Query - Database Basics and the Need for Query course describes the IBM Query for i sort, report and analyze capabilities, providing examples of how IBM Query for i is used to create business reports. Basic concepts relating to database structure and the data that can be accessed by IBM Query for i are explained in detail. Query – Creating a Simple Query The Query - Creating a Simple Query course begins by describing how database files are joined, enabling IBM Query for i to more easily reference data. This is followed by the identification of data that will be required by IBM Query for i and the subsequent coding required to extract that information. Details on options used to save the query and print, or display, the resulting report are also covered. Query – Advanced Query Features and Management The Query - Advanced Query Features and Management course begins by looking at more complex capabilities associated with query definitions. It then moves into the management of queries, explaining how queries can be copied, modified, deleted, printed and run. The Data File Utility and its purpose are discussed, and finally a number of tips for managing queries are provided. View series →
The RPG/400 - Introduction course explains the fundamental features and structure of a Report Program Generator (RPG) program. It also describes the concepts of RPG programming. The RPG/400 - Coding course explains the fundamental Report Program Generator (RPG) operation codes that enable programmers to manage field values, perform numeric operations, and manipulate dates for retrieval and viewing. The RPG/400 - Programming course explains how to use Report Program Generator (RPG) features to write reports. It also describes structured programming and how to affect the flow of control.
RPG/400 Workstation Programming The RPG/400 - Workstation Programming Introduction course shows how to design and develop screen layouts and how the program can interact with the user to deliver and accept data. RPG/400 Advanced Workstation Programming The RPG/400 - Advanced Workstation Programming course shows how to use the standard IBM i tools to write and compile RPG/400 programs. This course also covers the design and use of dynamic databases in your program. View series →
IBM i – System Administration Fundamentals The IBM i System Administration Fundamentals course introduces the learner to administrator tasks they will be expected to perform and begins with the basics: the role of the IBM i system administrator, configuration, backup and recovery strategies, and monitoring tools. IBM i – Security Implementation The IBM i Security Implementation course introduces the learner to tasks they will be expected to perform relating to security implementation, managing user access and authorities, resolving security problems, and security auditing. IBM i – Journal Management The IBM i Journal Management course introduces the learner to journaling and how to manage journals. IBM i – Storage Management The IBM i Storage Management course introduces the learner to IBM i storage management and redundancy. IBM i – Logical Partitioning and Virtualization This course looks at logical partitioning and the benefits that can be gained from implementing this feature on your IBM i system. Details of the products that can be used to define partitions, as well as the process itself, are provided, and the management and performance of partitions is also discussed in detail. View series →
SA z/OS – IBM Z System Automation The SA z/OS - IBM Z System Automation course provides the learner with a basic understanding of what IBM Z System Automation is and how it can be used in today’s modern enterprise.
The course begins by looking at some of the benefits the product provides in managing system resources through automation. The course also covers the various components that enable automation functionality. Finally, the course delves further into what tools are available for IT personnel to interact with and use IBM Z System Automation. SA z/OS – IBM System Automation: Planning, Installation, and Customization The SA z/OS - IBM System Automation: Planning, Installation, and Customization course takes the user through the steps required to install and configure SA z/OS V4.2 on their system. Initial content covers implementation considerations, and provides the learner with tools used in the planning and installation process. The configuration assistant section describes how some of these implementation tasks can be automated. The Customization Dialog module describes how this product is used to build an SA z/OS automation policy database. It guides the learner through the creation of database entries, to the compilation of the Systems Operations Configuration File, enabling it for distribution and use by an automated system. SA z/OS – IBM System Automation: Operations The SA z/OS - IBM System Automation: Operations course focuses on the SA z/OS administration and monitoring tasks performed by operations and administrator personnel. Initial content looks at initializing SA z/OS, and describes start-up options that can be invoked for the automation manager and automation agent. Tasks that allow you to refresh automation configuration data, and enable automation through the use of automation flags, pacing gates and runmodes, are also discussed. SA z/OS functionality that may unintentionally inhibit automation activity, and how these issues are resolved, is also covered. A detailed look at the commands and tools used to display and manage SA z/OS activity is provided, and details associated with diagnosing and resolving common problems are discussed.
SA z/OS – Automation Definitions Introduction and Workshop The SA z/OS - Automation Definitions Introduction and Workshop course begins by describing the key Entry Types that can be defined in an SA z/OS policy database, and their purpose. Following this, a workshop-style module provides you with hands-on exercises used to create an application, application group and related automation definitions. SA z/OS – Advanced Automation and Reporting The SA z/OS - Advanced Automation and Reporting course introduces the learner to some more advanced implementation and configuration possibilities focusing on end-to-end automation, and the automation of CICS, IMS, and Db2 environments. It also looks at various methods used to report on the policy database content, automation activity, and statistics produced by SMF relating to automation usage. View series →
zWS – Understanding How IBM Z Workload Scheduler Processes Work The Understanding How IBM Z Workload Scheduler Processes Work course discusses the need for workload scheduling in today's enterprise organization, and provides general information describing how IBM Z Workload Scheduler (formerly IBM Tivoli Workload Scheduler for z/OS, or TWSz) processes jobs. zWS – Monitoring and Managing the IBM Z Workload Scheduler Environment The Monitoring and Managing the IBM Z Workload Scheduler for z/OS Environment course describes how IBM Z Workload Scheduler is used to monitor and manage batch processing flows. Details relating to job restart and recovery using this product are also provided. zWS – Scheduling with IBM Z Workload Scheduler The Scheduling with IBM Z Workload Scheduler course explains how JCL is configured for the IBM Z Workload Scheduler (formerly Tivoli Workload Scheduler for z/OS, or TWSz) environment and how job schedules are created.
zWS – Maintaining the Integrity of IBM Z Workload Scheduler The Maintaining the Integrity of IBM Z Workload Scheduler for z/OS course describes the creation and modification of current plans and long-term plans and the backup and recovery associated with them. View series →
IMS 15 Introduction The IMS 15 Introduction course provides a broad overview of IMS describing its purpose, strengths and weaknesses, functional components, and processing concepts. IMS 15 Commands The IMS 15 Commands course explains the different methods by which IMS commands can be invoked and provides examples of commands used to display various IMS system activity. A detailed description of the IMS startup and shutdown process and the associated commands is also provided. IMS 15 Databases The IMS 15 Databases course covers in detail how data is stored within an IMS database and describes how it is referenced and accessed from a number of different sources. Instructions describing how to create database definitions and allocate databases and components are also provided. IMS backup and recovery strategies are discussed as well as the use of maintenance utilities used in day-to-day operations. IMS 15 Transaction Manager for Programmers This course describes how IMS Transaction Manager (TM) is used by application programs to communicate with an organization's database content. It describes how IMS TM processes messages and the types of requests it can receive from application programs. The student is then shown how to code an IMS program and prepare it for execution. Examples using COBOL, PL/I, C, and Pascal are provided. Details on how the completed program needs to be defined to IMS are covered, as well as the use of terminals and how they are configured for IMS TM use. View series →
IBM (z/OS) – Introduction to the IBM Enterprise Environment This course examines what a mainframe is, why it has survived and the IT personnel that need to interact with it.
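The IMS 15 Transaction Manager for Programmers description above mentions coding IMS programs in COBOL. As a heavily hedged fragment (the PCB and data names are hypothetical, and the surrounding program structure is omitted), a DL/I retrieval call from COBOL conventionally looks like:

```
       WORKING-STORAGE SECTION.
       01  DLI-GU          PIC X(4)  VALUE 'GU  '.   *> Get Unique function
       01  SEG-IO-AREA     PIC X(80).                *> segment I/O area
      *    The PCB mask (DB-PCB) is received via the LINKAGE SECTION.
       PROCEDURE DIVISION.
           CALL 'CBLTDLI' USING DLI-GU, DB-PCB, SEG-IO-AREA.
```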
It then discusses the basic hardware, software and networking components and the methods used to access and process data on the mainframe. IBM (z/OS) – z/OS Systems Programming Fundamentals This course provides the learner with a more in-depth view of the z/OS system and covers concepts such as virtual storage, system initialization and how system data sets and parameters can be modified in response to system and network issues. IBM (z/OS) – IBM Development Environment Overview This course discusses the use of mainframe data sets and database files to store organizational data and examines the batch and online methods used to process that data. Widely used mainframe programming languages such as COBOL, PL/I, C++, REXX, CLIST and Java are introduced, and a description of the Language Environment used to provide many of these languages with common runtime routines is presented. View series →
Java on z/OS for Java Programmers This course is designed for Java programmers who need to port their skills and knowledge to Java in a z/OS environment. It explains how Java uses features associated with z/OS UNIX, and is supported by the Java Software Development Kit. A step-through showing how Java programs are compiled and run in the z/OS environment confirms the similarities between this platform and other Java-enabled environments. You will also see how Java programs can be invoked from batch, CICS, IMS, Db2, and WebSphere. Java Introduction for the IBM Enterprise This course is intended for experienced Mainframe Programmers, particularly COBOL programmers, who need to understand Java and the basic concepts of object orientation and how it differs from programming languages traditionally used in an enterprise environment. The majority of content focuses on the structure of the Java language, and the tools and utilities that support the Java environment.
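The Java on z/OS course above notes that Java programs compile and run on z/OS just as on other platforms. A minimal sketch (class and method names are invented) that would behave identically under z/OS UNIX or any other JVM:

```java
// A trivial class: compiled with javac and run with java,
// the same way under z/OS UNIX as on any other platform.
public class ZosHello {

    // Build a greeting string; pure JVM behavior, platform-independent.
    public static String greet(String platform) {
        return "Hello from " + platform;
    }

    public static void main(String[] args) {
        System.out.println(greet("z/OS"));
    }
}
```

The same class file could equally be invoked from batch, CICS, or the other launchers those environments provide.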
Java Programming for the IBM Enterprise This course is intended for experienced Mainframe Programmers, particularly COBOL programmers, who need to be able to use Java as an alternative language to COBOL and to use Java to extend enterprise systems to the Internet. A breakdown of code commonly used with Java programming is supplied, providing equivalent COBOL examples along the way. Java Data Access for the IBM Enterprise This course is intended for experienced Mainframe Programmers, particularly COBOL programmers, or Java programmers new to the IBM enterprise environment, who need to understand Java data file and database access, I/O methods, and the special requirements and facilities used to access the IBM enterprise systems' unique data storage facilities, as well as how to use JavaBeans as reusable objects and Enterprise JavaBeans to access the facilities provided by enterprise systems. View series →
JCL (z/OS) – Introduction to JCL The Introduction to JCL course discusses the organization's need to run batch processing, describing the people that utilize it, and the types of tasks performed with it. It explains where batch job JCL can be stored and the tools that can be used to access it. From there the course moves into the structure of JCL, explaining the basic syntax requirements and the types of accompanying parameters. JCL (z/OS) – JCL Coding Basics – JOB and EXEC Statements This course describes the purpose of commonly used JOB and EXEC JCL statements, and concentrates on the parameters encountered when working with these statements. JCL (z/OS) – JCL Coding Basics – DD Statements The DD statement is the most often used JCL statement, responsible for defining the input and output resources required when running a program. This course describes the parameters required when dealing with existing data sets, and when needing to create new ones. It also looks at the coding for printed output and job sysout.
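The JOB, EXEC and DD statement courses above describe the core statements of a batch job. A hypothetical sketch (the job name, account code and data set names are invented; IEBGENER is the standard IBM copy utility):

```
//PAYCOPY  JOB (ACCT123),'COPY PAYROLL',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEBGENER
//SYSUT1   DD  DSN=PROD.PAYROLL.INPUT,DISP=SHR
//SYSUT2   DD  DSN=PROD.PAYROLL.COPY,
//             DISP=(NEW,CATLG,DELETE),
//             SPACE=(TRK,(5,1)),RECFM=FB,LRECL=80
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
```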
JCL (z/OS) – Advanced JCL Data Set Use The courses presented to date have concentrated on simple sequential and partitioned data sets. In this course you will look at other types of data that can reside on a mainframe, in particular VSAM data sets and z/OS UNIX files and how they can be accessed. You will also see the benefits of creating generation data sets and the JCL code used to create and reference them. The last module concentrates on placing data onto tape, providing some best practices when dealing with this medium. JCL (z/OS) – Controlling Job and Step Processing While JCL is generally rigid in the way that it runs programs and related jobs, in recent years there have been several advancements in code that can be used to conditionally run steps, and schedule jobs. This course discusses the use of the traditional COND parameter to control step processing, and the use of the IF/THEN/ELSE/ENDIF construct as an alternative. New basic job scheduling capabilities are also discussed. JCL (z/OS) – Working with Procedures and Symbols Previous courses have described many of the statements and parameters to build a basic job. This course looks at some advanced JCL capabilities including the storing of JCL code externally and calling it in the form of a procedure or an INCLUDE group. You will also see how symbols can be incorporated into JCL, and the benefits and flexibility they can provide. JCL (z/OS) – Running and Debugging JCL In previous JCL courses you have been presented with many examples of the types of errors that can be produced when running your JCL. This course consolidates many of these and looks at general problem and resolution practices associated with batch job submission, resource allocation, and abends. JCL restarts are also discussed, identifying any processing clean-up that needs to be performed, and the methods used to rerun or restart your job. 
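The Controlling Job and Step Processing course above contrasts the traditional COND parameter with the IF/THEN/ELSE/ENDIF construct. A hedged sketch of the two styles (the program and step names are hypothetical):

```
//* COND form: bypass STEP2 if 4 is less than STEP1's return code
//STEP2    EXEC PGM=RPTPGM,COND=(4,LT,STEP1)
//*
//* Equivalent IF/THEN/ELSE/ENDIF form
//         IF (STEP1.RC <= 4) THEN
//STEP2    EXEC PGM=RPTPGM
//         ENDIF
```

The COND test reads "skip this step if 4 is less than the prior return code", an inverted logic that many coders find harder to follow than the IF construct.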
JCL (z/OS) – Advanced – Tips & Tricks This course contains many JCL-related tips, tricks, techniques, and best-practice items that you may find useful in your day-to-day activities. It covers a number of new areas of functionality associated with z/OS 2.2, 2.3 and 2.4, and provides details on statements and parameters that have evolved over the last few years. View series →
JES2 – System Initialization and Shutdown v2.4 This course describes how and why JES2 evolved and introduces the major JES2 components, their purpose, and general terminology. A number of scenarios are presented that describe how JES2 devices are used, their possible statuses, and how jobs are processed. The final module discusses operational aspects associated with JES2 including how it is automatically and manually started and stopped, and commands that can be used when there are problems with these processes. JES2 – Monitoring Batch Jobs with JES2 This course describes the JES2 command syntax and provides numerous examples explaining the scenarios in which JES2 commands are used. After an initial overview of commonly used JES2 commands, modules focus on the commands used for displaying printer, initiator, and batch job attributes and status. JES2 – Using JES2 in Scheduling Batch Jobs v2.4 This course builds on the previous one and discusses some of the more commonly used JES2 commands for manipulating initiator and printer resources and batch job activity. A focus on the JES2 commands associated with modifying batch job attributes as they progress through the JES2 queues is also provided. The display and management of network-related JES2 components is also discussed. JES2 – Identify and Resolve JES2 Batch Problems v2.4 This course provides you with examples of common JES2 batch job-related problems and explains the process and JES2 commands that are used to display, analyze and resolve those issues.
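The JES2 command courses above work with display and manipulation commands. A hedged sample of classic command forms (the job and initiator numbers are hypothetical, and exact syntax varies by JES2 release):

```
$DA              Display active jobs
$DI              Display initiator status
$SI3             Start initiator 3
$PI3             Drain (stop) initiator 3
$CJ1234          Cancel job 1234
```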
Detailed information relating to the function of JES2 checkpoints, their placement, and attributes is provided, along with steps required to resolve problems associated with this resource. JES2 – Identify and Resolve JES2 System Problems v2.4 This course looks at JES2 initialization parameters used to define JES2 system resources and the subsequent JES2 commands used to display and resolve problems that occur with these items. The JES2 shutdown process is revisited in more detail, providing information on problem resolution techniques if JES2 cannot be shut down gracefully. JES2 – Command Simulations v2.4 A number of simulations are provided that the student can use to assess their skills and knowledge in entering commands, and in interpreting the output produced, when monitoring and manipulating the JES2 subsystem and its resources. JES2 – Advanced – Tips and Tricks v2.4 This course contains many JES2-related tips, tricks, techniques, and best-practice items that you may find useful in your day-to-day activities. It covers several new areas of functionality associated with z/OS 2.3 and 2.4. View series →
JES3plus – JES3plus Fundamentals This course provides an introduction to the evolution of mainframe processing, which led to the development of the job entry subsystem, JES3plus. Key concepts about JES3plus and how it manages system resources and workflow are covered, along with details on who needs to interact with JES3plus regularly. JES3plus – JES3plus for System Operators This course gives system operators insight into JES3plus commands that are used in performing inquiries on jobs and devices, how to modify a job’s properties, and how to vary the status of devices. Also discussed are commands used to start and stop JES3plus and what start options can be used and when.
JES3plus – JES3plus for System Programmers This course provides learners with an introduction to how JES3plus is initialized at startup and how the initialization stream is used to identify system resources to JES3plus. Following on, the learner is shown what resources need to be defined to JES3plus, such as spool data sets, checkpoint data sets, mains, storage, and buffers, among others, all of which are vital to JES3plus’ processing functionality. JES3plus – JES3plus for Application Programmers In this course application programmers are shown how, through the use of job control language (JCL), JES3plus control statements can be used to provide JES3plus with special instructions to apply to their jobs during job processing. Also discussed is how deadline scheduling can be implemented to make the best possible use of available resources, along with dependent job control, which can be used to control the flow of jobs based on specific conditions. View series →
Introduction to Linux The Introduction to Linux course provides you with an overview of the Linux operating system and describes how it is used in today's System z environment. Information on interfaces used to access the Linux environment and standard communication tools is also discussed. The Linux File System The Linux File System course describes the file structure within the Linux environment and explains how files are accessed, displayed and manipulated. Details of security measures in relation to Linux files are also provided. A number of general tasks associated with monitoring and managing the Linux file system are also discussed. Editing with VI The Editing with vi course describes how the vi Editor is used to open and update text files. Editing techniques such as searching, filtering, finding, copying and replacing text are covered, and some advanced material relating to the editor configuration and programming support is also provided.
Linux Shell Programming The Linux Shell Programming course describes the use of coding components such as variables, parameters, expressions, and functions that can appear within a shell script. Details relating to conditional execution and looping that can be programmed into the script are supplied, along with the handling of script errors. Linux Operations The Linux Operations course describes the purpose of Linux Processes and explains how these activities can be monitored and managed. Information describing how to create Linux jobs and optionally schedule them to run is discussed, along with other operational tasks relating to system logs and shell customization. Linux on IBM Z Fundamentals The Linux on IBM Z Fundamentals course discusses available Linux distributions for the IBM Z environment, its operational implementation, and the general monitoring and management of Linux. The final module provides an overview of the performance monitoring and management tasks performed by the Linux Administrator, and contains tips for best practice in these areas. View series →
Introduction to Sampling Performance Tools This course introduces sampling-based performance tools such as Compuware Strobe, IBM Application Performance Analyzer, and Macro4 FreezeFrame, which are available for z/OS environments. It describes what these tools are, the information that they provide, and how this assists with application performance tuning. It includes information on the tools currently available, performance implications when using them, and how to minimize any impact they may have on the system. It also steps through how sampling sessions can be started, and the key parameters for such sessions. Mainframe Application Performance Tuning A general introduction to mainframe application performance. In this course, you will be introduced to the basic concepts of improving application performance by tuning.
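The Linux Shell Programming course above lists variables, parameters, conditional execution, looping and error handling as its coding components. A minimal sketch in that spirit (the function name is invented):

```shell
# greet_count: demonstrates parameters with defaults, a conditional
# with an error path, and a while loop -- the building blocks the
# shell programming course describes.
greet_count() {
  local name="${1:-world}"   # first parameter with a default
  local times="${2:-1}"      # second parameter with a default
  if [ "$times" -le 0 ]; then
    echo "error: times must be positive" >&2
    return 1                 # simple error handling
  fi
  local i=1 out=""
  while [ "$i" -le "$times" ]; do
    out="${out}hello ${name};"
    i=$((i + 1))
  done
  printf '%s\n' "$out"
}
```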
This course includes suggestions on when and why to tune, tuning for CPU vs service time, and also considers batch and online performance objectives. View series →
Labs – COBOL (z/OS) – COBOL Basics The Labs – COBOL (z/OS) – COBOL Basics course provides a range of exercises that can be run in your organization’s mainframe training sandbox. Initial exercises focus on compiling, binding and executing existing COBOL programs, and then move to creating basic COBOL code used to display formatted output. Labs – COBOL (z/OS) – COBOL and VSAM Data Set File Processing The Labs – COBOL (z/OS) – COBOL and VSAM Data Set File Processing course provides a range of exercises that can be run in your organization’s mainframe training sandbox. Exercises in this course focus on copying data into a VSAM data set and then updating part of that data. Labs – COBOL (z/OS) – COBOL and Sequential Data Set File Processing The Labs – COBOL (z/OS) – COBOL and Sequential Data Set File Processing course provides a range of exercises that can be run in your organization’s mainframe training sandbox. Exercises in this course focus on accessing records from a sequential data set and performing various actions with that data. Labs – COBOL (z/OS) – Working with COBOL Data The Labs – COBOL (z/OS) – Working with COBOL Data course provides a range of exercises that can be run in your organization’s mainframe training sandbox. Exercises in this course are more difficult than those from the Labs – Basic course, focusing on a range of arithmetic operations, calling programs and passing information between programs. Labs – TSO/ISPF (z/OS) The Labs TSO/ISPF course provides a range of exercises that can be run in your organization's mainframe training sandbox. Initial exercises focus on modifying TSO session defaults and allocating data sets, while later modules deal with copying and manipulating mainframe data using ISPF panels and utilities.
Labs – JCL (z/OS) The Labs JCL (z/OS) course provides a range of JCL exercises that can be run in your organization's mainframe training sandbox. Initial exercises focus on basic coding and resolving syntax errors and increase in complexity throughout the course. Labs – JES2 (z/OS) The Labs - JES2 (z/OS) course provides a range of exercises that can be run in your organization's mainframe training sandbox. Initial exercises focus on entering JES2 commands to manage batch jobs and SYSOUT. In later modules, more advanced JES2 tasks associated with spool offload, JES2 checkpointing and JES2 initialization are presented. Labs – SDSF (z/OS) The Labs - SDSF course provides a range of exercises that can be run in your organization's mainframe training sandbox. Initial exercises focus on configuring SDSF screen defaults and then move on to managing batch jobs and their output using a number of SDSF options. Several exercises relating to searching and saving data from system and user logs are also provided. The last two exercises are for more experienced personnel and deal with JES2 initiator manipulation, and locating data that can assist if there is a JES2 resource shortage. Labs – z/OS Operator The Labs - z/OS Operator course provides a range of exercises that can be run in your organization's mainframe training sandbox. Exercises focus on the major tasks performed by the z/OS Operator including responding to system messages, displaying system activity and identifying the status of hardware and software. Labs – z/OS System Programmer – Basics (z/OS) These mainframe exercises provide scenarios where you are required to perform tasks and resolve simple and more complex z/OS-related problems, using your organization's training sandbox. Labs – z/OS UNIX (z/OS) These mainframe exercises provide scenarios where you are required to perform tasks and resolve simple and more complex z/OS-related problems, using your organization's training sandbox. 
Labs – IMS, Labs – Database (Db2) (SQL), Labs – CICS, Labs – Introduction to Mainframe Networks, and Labs – Introduction to z/OS Security are COMING SOON! View series →
Managed File Transfer for Operations This course describes how file transfer requirements have grown in importance over the years to the point where file transfer has become an integral part of daily workload processing. It then focuses on the elements of file transfer, covering security aspects such as encryption and hashing algorithms, as well as discussing the pros and cons of popular file transfer protocols. This knowledge is paramount when needing to manage and monitor a file transfer environment. The final part of this course discusses the common features associated with general Managed File Transfer (MFT) products, looking at tasks that need to be undertaken to ensure that compliance, internal regulations and SLAs are met, and that data integrity and security are maintained. View series →
Parallel Sysplex – Fundamentals 2.4 This course begins by describing how the Parallel Sysplex evolved and why it is an integral component of today's enterprise IT environment. Its key features are discussed in terms of the benefits they provide to the organization: system availability, data integrity, workload and data sharing, and automated recovery, to name a few. A breakdown of the major Parallel Sysplex components is then presented, describing their importance and how they can be configured. View series →
PL/1 Fundamentals and Data Representation The PL/1 Fundamentals and Data Representation course introduces the PL/1 language. It explains the basic building blocks, particularly how data and program storage is represented and defined in the language. PL/1 Program Design Techniques The PL/1 Program Design Techniques course builds on the PL/1 Fundamentals course. It explains how to build a structured program in the language. It also describes many of the common built-in functions.
PL/1 Accessing Data in Files
The PL/1 Accessing Data in Files course explains how to use PL/1 to access and update data in both sequential and random access files. It also describes how to handle file and generic error conditions.

PL/1 Preprocessor, Debugging and Advanced Coding
The PL/1 Preprocessor, Debugging and Advanced Coding course explains the reasons for program errors and describes the tools available to debug a PL/1 program. It also explores some more advanced coding techniques in the language.

Introduction to Project Management and PMBOK® Guide-Sixth Edition
The Introduction to Project Management and PMBOK® Guide-Sixth Edition course describes what projects are, and how they are part of everyday life. It looks at the main components of a project and how they can be broken down.

Project Integration Management
This course is suitable for anyone who needs to develop an understanding of Project Integration Management and Change Control principles aligned to the PMBOK® Guide-Sixth Edition.

Project Scope Management
This course is suitable for anyone who needs to develop an understanding of Project Scope Management principles aligned to the PMBOK® Guide-Sixth Edition.

Project Schedule Management
This course is suitable for anyone who needs to develop an understanding of Project Schedule Management principles aligned to the PMBOK® Guide-Sixth Edition.

Project Cost Management
This course is suitable for anyone who needs to develop an understanding of Project Cost Management principles aligned to the PMBOK® Guide-Sixth Edition.

Project Quality Management
This course is suitable for anyone who needs to develop an understanding of Project Quality Management principles aligned to the PMBOK® Guide-Sixth Edition.

Project Resource Management
This course is suitable for anyone who needs to develop an understanding of Project Resource Management principles aligned to the PMBOK® Guide-Sixth Edition.
Project Communications Management
This course is suitable for anyone who needs to develop an understanding of Project Communications Management principles aligned to the PMBOK® Guide-Sixth Edition.

Project Risk Management
This course is suitable for anyone who needs to develop an understanding of Project Risk Management principles aligned to the PMBOK® Guide-Sixth Edition.

Project Procurement Management
This course is suitable for anyone who needs to develop an understanding of Project Procurement Management principles aligned to the PMBOK® Guide-Sixth Edition.

Project Stakeholder Management
This course is suitable for anyone who needs to develop an understanding of Project Stakeholder Management principles aligned to the PMBOK® Guide-Sixth Edition.

CAPM Practice Test for PMBOK® Guide-Sixth Edition
This practice test will help you prepare for the Certified Associate in Project Management (CAPM) or the Project Management Professional (PMP) certification. It is important to note that both certifications have additional requirements regarding project management experience and education. This test contains 150 multiple-choice questions with detailed feedback, shown after you answer each question. Each time you take this test you will be presented with a different set of questions drawn from a pool of over 200 questions. There is no time limit on this test, as you are allowed to view the feedback as you answer each question. Note: the PMI CAPM exam is a computer-based test with 150 multiple-choice questions, 15 of which are unscored. Test-takers have three hours to complete the exam.

Rational Developer for System z Basics Training Course
This course introduces the learner to the IBM Rational Developer for System z (RDz) product, describing the general tasks that can be performed using it, as well as its configuration and other features.
A guide to RDz's GUI interface is provided, including details on connecting to resources on a host system.

Creating and Managing Applications Using RDz
This course describes the methods used to create, manage and maintain applications under RDz. It provides details on the benefits of projects and subprojects within RDz and the application tasks that can be performed within this structure. RDz testing and debugging capabilities are discussed in detail, in particular the zUnit testing framework and the IBM Integrated Debugger. The final module addresses the RDz features that allow you to create applications for Db2, CICS and IMS.

REXX with z/OS and TSO/E
The REXX with z/OS and TSO/E course explains how REXX is used in TSO and z/OS environments.

Introduction to the REXX Programming Language
The Introduction to the REXX Programming Language course introduces the REXX programming language and explains how it is run. It also reviews and describes the major elements that comprise a REXX program.

REXX Keyword Instructions
The REXX Keyword Instructions course discusses the common keyword instructions used in REXX coding and describes how looping and execution control instructions are invoked.

REXX Built-In Functions
The REXX Built-In Functions course describes the standard built-in functions that are available with REXX.

SDSF – Concepts and Operation 2.4
The SDSF - Concepts and Operation course describes the purpose of SDSF, providing details on how it is accessed and how you interact with it. The course then explains how data is located and how filtering commands are used to display specific information. Details of SDSF initialization and shutdown are provided, with solutions to common problems. Finally, SDSF logs and how they are used are described.
SDSF – Using SDSF to Control Job Processing 2.4
The SDSF - Using SDSF to Control Job Processing course describes how job activity can be displayed using the Input, Display Active and Status SDSF panels. It discusses how the attributes of jobs, including their status, can be modified by overtyping existing values or by entering commands. Control of overall batch job activity through the use of MAS, Scheduling Environments and Initiators is also discussed.

SDSF – Using SDSF to Display, Manipulate and Print Job Output
The SDSF - Using SDSF to Display, Manipulate and Print Job Output course describes how held and non-held output is displayed, and provides information on the commands that can be used to modify output attributes or delete the output altogether. Details associated with displaying and modifying printer attributes and activity are also covered.

SDSF – Using SDSF to Manage System Resources and Devices 2.4
The SDSF - Using SDSF to Manage System Resources and Devices course describes the use of the IBM Health Checker and explains the commands that can be used to run, delete, restore, activate and deactivate a check. Displaying and interpreting JES2 resource data is covered, along with the process of handling system requests and action messages. Details associated with displaying and managing spool, and JES2 node and line activity, are also explained.

SDSF – Advanced – Tips and Tricks 2.4
This course contains many SDSF-related tips, tricks, techniques, and best-practice items that you may find useful in your day-to-day activities. It covers several new areas of functionality associated with z/OS 2.3 and z/OS 2.4.

Introduction to Mainframe Security v2.4
This course provides the learner with a basic understanding of z/OS security. It introduces basic security concepts as they relate to z/OS, including the reasons for security, physical security and the Logon ID.
It covers traditional z/OS security issues such as data set protection and TSO/E, together with recent developments including LDAP and passphrases. Sections on security auditing and event recording, and a detailed explanation of the Authorized Program Facility (APF), are also included.

RACF – Introduction 2.4
This course introduces the learner to IBM's RACF security software, explaining how it has evolved, how it is typically used in z/OS, and how it can interact with non-z/OS workloads. It discusses the importance of security and the types of resources RACF protects. The course then introduces the concept of user and group profiles and describes, from a user perspective, RACF's interaction with day-to-day user tasks. Examples showing how various users can interact with RACF are also provided.

RACF – Defining and Managing Users 2.4
The "RACF - Defining and Managing Users" course details the skills that are required by a security administrator, programmer, or DBA in using RACF to secure systems and data. It explains how to define and maintain individual users within RACF, using several interfaces.

RACF – Managing RACF Groups and Administrative Authorities 2.4
The "RACF - Managing RACF Groups and Administrative Authorities" course follows on from the "RACF - Defining and Managing Users" course, describing how users can be connected to group profiles and can be assigned special privileged access.

RACF – Protecting Data Sets Using RACF 2.4
The "RACF - Protecting Data Sets Using RACF" course describes how RACF is used to define access to z/OS data sets. The profiles used to provide this access are also discussed in detail.

RACF – Protecting General Resources Using RACF
The "RACF - Protecting General Resources Using RACF" course describes how RACF is used to define access to system resources such as DASD and tape volumes, load modules (programs), and terminals. The profiles used to provide access to these items are also discussed in detail.
RACF – RACF and z/OS UNIX 2.4
The "RACF - RACF and z/OS UNIX" course describes the requirements for configuring security in a z/OS UNIX environment using RACF. It covers the creation and use of UID and GID definitions, as well as the file and directory permission bits and access control lists that are referenced when accessing z/OS UNIX resources.

RACF – Managing Digital Certificates 2.4
In the "RACF - Managing Digital Certificates" course you will see how encryption keys are used to securely manage data, and the standards that enforce encryption protocols. You will be introduced to various types of certificates and see how data is stored in them. From a z/OS perspective, you will see how IBM's Digital Certificate Access Server (DCAS) provides password-free access to that environment using a certificate. The commands used to generate and manipulate digital certificates and keyrings are discussed in detail.

RACF – For System Programmers 2.4
The "RACF - For System Programmers 2.4" course describes how the RACF database is structured and configured, and the skills needed to ensure that it runs optimally.

RACF – For Auditors 2.4
The "RACF - For Auditors" course describes the various types of data center audits and discusses the role of an internal auditor when performing a RACF audit. It expands this to look at the general steps needed to ensure that RACF-managed security is aligned with both organizational security standards and external compliance regulations. RACF auditor privileges are discussed in detail, describing how audit information is stored and the commands used to request the capture of specific events. The type of data that can be unloaded from SMF and the RACF database is explained, along with details on how ICETOOL can be used to process this information to create audit reports.

CA-ACF2® – Introduction
These courses describe how CA ACF2™ is used to protect and secure the system against accidental and malicious access and damage.
It instructs the student on how CA ACF2™ works and how to define users, rules and parameters to restrict access to the system and its resources.

CA-ACF2® – Defining Environment Controls
These courses describe how CA ACF2™ is used to protect and secure the system against accidental and malicious access and damage. It instructs the student on how CA ACF2™ works and how to define users, rules and parameters to restrict access to the system and its resources.

CA-ACF2® – Protecting System Access
These courses describe how CA ACF2™ is used to protect and secure the system against accidental and malicious access and damage. It instructs the student on how CA ACF2™ works and how to define users, rules and parameters to restrict access to the system and its resources.

CA-ACF2® – Protecting Data Integrity
CA-ACF2® – Protecting General Resources
CA-ACF2® – Maintaining ACF2
CA-ACF2® For Auditors

Service Oriented Architecture
This course describes what Service Oriented Architecture (SOA) is and why businesses today are looking at implementing it. It outlines the components and architecture associated with an SOA environment and explains the challenges and barriers to SOA adoption.

ISPF (z/OS) – Using Online System Facilities – TSO/ISPF
The Using Online System Facilities - TSO/ISPF course explains what TSO is and how it is accessed. It then describes how to log on to ISPF, provides details of navigation methods and program function key definition, and explains how basic ISPF settings can be configured.

ISPF (z/OS) – Managing Data Files and Definitions with ISPF/PDF
The Managing Data Files and Definitions with ISPF/PDF course explains how to use the ISPF menu options to display the contents of data sets, and how functions such as copying, printing, renaming, and deleting are performed on these objects.
ISPF (z/OS) – Maintaining Data in Files with the ISPF Editor
The Maintaining Data in Files with the ISPF Editor course explains how the ISPF Editor is used to view, browse, and edit data within a data set.

TSO/ISPF – Advanced – Tips & Tricks
This course contains many TSO- and ISPF-related tips, tricks, techniques, and best-practice items that you may find useful in your day-to-day activities. It covers several new areas of functionality associated with z/OS 2.3 and z/OS 2.4.

Utilities – General Data Set Utilities 2.4
This course looks at the IEFBR14, IEHPROGM and IEBCOPY utilities and discusses how they are used to create, copy, and delete data sets. The JCL requirements for these utilities, along with their control statement syntax, are also covered in detail.

Utilities – Data Utilities 2.4
This course looks at the IEBGENER, ICEGENER, IEBCOMPR, IEHLIST and DFSORT utilities and provides real-life examples describing how they are used to interrogate and modify data set content. The JCL requirements for these utilities, along with their control statement syntax, are also covered in detail.

Utilities – Advanced – Tips and Tricks
This course contains many utility-related tips, tricks, techniques, and best-practice items that you may find useful in your day-to-day activities. It covers several new areas of functionality associated with z/OS 2.3 and z/OS 2.4.

z/VM – Concepts, System Initialization, and Shutdown
The z/VM Concepts, System Initialization and Shutdown course describes how virtualization, and in particular z/VM, has become more popular in data centers, and examines the processes used for z/VM start-up and shutdown.

z/VM – Monitoring and Controlling z/VM Operations
The Monitoring and Controlling z/VM Operations course describes the tasks associated with displaying z/VM system status and activity, and the management of z/VM resources.
z/VM – Managing Guest Operating Systems
The Managing Guest Operating Systems course describes the types of guests that can be installed under z/VM and the methods used to create, display and manipulate CMS files.

z/VM – Identifying and Resolving z/VM Problems
The Identifying and Resolving z/VM Problems course looks at the tools and methods used to gather information that assists with problem resolution, and discusses how performance issues and general problems are resolved. The processes and utilities used for backup and recovery are also described.

Linux on z Systems Fundamentals
The Linux on z Systems Fundamentals course discusses common Linux distributions for the z Systems environment, how Linux is accessed, its operational implementation, and the general monitoring and management of Linux. The Administrator module provides an overview of the tuning, monitoring, and analyzing tasks performed by the Linux Administrator and contains tips for best practice in these areas.

VSAM – Introduction to VSAM 2.4
This course provides the learner with a basic understanding of the VSAM access method and VSAM data sets on z/OS. It introduces what VSAM is and when it is used. It includes information on the different VSAM data set types, when each is used, and their internal structure. Information on creating, copying, deleting, and managing VSAM data sets using JCL, TSO/E commands, the IDCAMS batch utility, and other tools is also covered. Finally, other products used to manage VSAM data sets are introduced - both from IBM and other vendors.

VSAM – Managing VSAM Data 2.4
This course explains how VSAM data can be configured, allowing it to be shared by jobs, TSO users, UNIX processes and started tasks. It also addresses the recovery options available when VSAM data is shared. A detailed explanation of the parameters affecting VSAM performance is provided, as well as the types of utilities used to capture VSAM performance statistics.
VSAM – Advanced – Tips and Tricks 2.4
This course contains many IDCAMS utility tips, tricks, techniques, and best-practice items associated with VSAM data.

WebSphere – Introduction to Java and WebSphere Application Server
This course introduces Java, one of the most popular programming languages in modern IT, and its enterprise extension, Jakarta Enterprise Edition (Jakarta EE), which supports and runs back-end enterprise applications. The course also introduces IBM's WebSphere Application Server and how it is used to implement Java/Jakarta EE services within an enterprise IT environment.

WebSphere – Introduction to Java and WebSphere Application Server on z/OS
In this course you are introduced to how Java works in a z/OS environment and some of the tools available. The course then introduces you to WebSphere Application Server for z/OS and WebSphere Application Server for z/OS Liberty, and their key features. The course then looks deeper into how WebSphere Application Server for z/OS and its Liberty variant interact and work with z/OS resources.

Introduction to the IBM Z Systems
This course describes how IBM Z hardware has evolved to cater for today's enterprise data processing needs.

Managing IBM Z Enterprise System Hardware
This course looks primarily at the Hardware Management Console (HMC) and provides an overview of the functionality and features of this product. It also discusses common HMC tasks and the authorization required to perform them.

IBM Z Hardware Models
This course discusses current IBM Z enterprise and LinuxONE system models and their predecessors. It provides details on their components and their functionality.

IBM Z – Introduction to LinuxONE
In this course, you will see how IBM has addressed evolving IT trends and embraced Linux, surrounding it with enterprise-standard hardware and software.
You will delve inside different LinuxONE servers and look more closely at how they are structured and how they can be configured. You will also see the technical improvements over previous LinuxONE models, and how the LinuxONE III LT1 compares to the z15, the entry-level LinuxONE III LT2, and the LinuxONE III Express.

IBM Z – Hardware Models – z13
This course introduces the IBM z13 mainframe server, describing its capabilities and features. It then focuses on the key z13 hardware components and the I/O structure used to transport data through the system.

IBM Z – Hardware Models – z14
This course introduces the IBM z14 mainframe server, describing its capabilities and features. It then focuses on the key z14 hardware components and the I/O structure used to transport data through the system.

IBM Z – Hardware Models – z15
This course introduces the IBM z15 mainframe server, describing its capabilities and features. It then focuses on the key z15 hardware components and the I/O structure used to transport data through the system.

IBM Z – Hardware Models – z16
This course introduces the IBM z16 mainframe server, describing its capabilities and new features. It then focuses on the key z16 hardware components and the I/O structure used to transport data through the system.

IBM Z – Introduction to the IBM Z Systems
This course describes how IBM Z hardware has evolved to cater for today's enterprise data processing needs.

z/OS – Concepts and Components
The z/OS Concepts and Components course describes the evolution of mainframe computing and provides descriptions of the major components that comprise today's z/OS environment. Details of general z/OS processing concepts are also provided.
z/OS – Initializing and Terminating the z/OS System
The Initializing and Terminating the z/OS System course describes what actions occur as part of a z/OS system initialization, and then delves into the system data sets and configuration libraries responsible for defining z/OS system characteristics. The final module in this course simulates a z/OS system start-up and shutdown, describing the most common commands and operator responses.

z/OS – Monitoring the z/OS System
This course introduces z/OS commands that can be used to display the status and attributes of various z/OS tasks and devices. An overview of system monitoring tools and facilities such as RMF, z/OSMF, traces and EREP is also provided, as well as a description of how SMF data is created and controlled. This course also discusses the need for message suppression and describes how this is achieved.

z/OS – Identifying z/OS System Problems
The Identifying z/OS System Problems course explores some of the processes, commands, and tools that are used in identifying system problems. It describes how common system problems are recognized, and the steps that can be taken to assist with problem resolution, including taking dumps and analyzing the catalog address space.

z/OS – Resolving z/OS System Problems
This course describes the processes and commands required to resolve common z/OS system problems. It also describes when cancel and force commands should be used and how to handle command flooding.

z/Architecture – Processing Workloads
The z/Architecture - Processing Workloads course describes how today's z/OS system processes workloads, focusing on the concept of address spaces and showing how they provide the environment under which tasks can run. You will look at the different types of CPUs that can be configured in a z/OS system and see how programs issue instructions to the CPU.
Diving down deeper, you will then look at the CPU chips themselves and view the components that comprise them, looking at their involvement in processing work. Finally, the major components that comprise the mainframe's I/O structure are presented to show how work moves throughout the z/OS environment.

z/Architecture – Memory, Address Spaces, and Virtual Storage
Processor storage, real storage, or central storage: whatever you call it, it is the memory where z/OS programs and their data need to reside before they can be processed, and like other mainframe resources it can be virtualized. In this course, you will see how the address spaces discussed in the previous course access and free the memory they require to process work. You will also see how virtualization of this resource occurs, giving IT specialists the knowledge needed to troubleshoot memory-related issues.

z/OS – Advanced – Tips & Tricks
This course contains many z/OS-related tips, tricks, techniques, and best-practice items that you may find useful in your day-to-day activities. It covers several new areas of functionality associated with z/OS 2.3 and z/OS 2.4.

z/OS MVS Command Simulations
A number of simulations are provided that the student can use to assess their skills and knowledge in entering commands, and interpreting the output produced, when monitoring and manipulating MVS system resources.

z/OS System Shutdown and IPL Simulations
Two simulations are provided that the student can use to assess their skills and knowledge in the manual shutdown and start-up of a z/OS system.

SMP/E – Introduction to SMP/E
Ensuring that all of your organization's z/OS system software is current, and that any fixes and improvements have been applied, is paramount to maintaining system availability.
This course looks at the SMP/E software and how it is used by the z/OS Systems Programmer to provide best-practice installation, management and reporting of z/OS system software.

z/OS Connect EE – IBM z/OS Connect Enterprise Edition V3
The z/OS Connect EE - IBM z/OS Connect Enterprise Edition course discusses the need for organizations to open up their mainframe data to cloud, mobile and web customers, and describes how z/OS Connect EE provides this capability. The course also covers what IT specialists need to know about how resource access and data communication are performed by z/OS Connect EE.

z/OSMF – The IBM z/OS Management Facility
This course provides the learner with a basic understanding of the z/OS Management Facility (z/OSMF). It begins with basic concepts: what z/OSMF is, why it is used, how it is configured, and first steps in logging on and using it. The course then delves further, providing the student with the skills needed to use all the z/OSMF features: problem management, configuration of WLM and TCP/IP, software management and deployment, capacity provisioning, performance monitoring, and workflow creation.

z/OS UNIX – z/OS UNIX System Services Basics
This introductory course looks at the evolution of UNIX on the mainframe and describes how it interacts with today's z/OS system products. It provides an overview of the major z/OS UNIX System Services components and shows typical workload processing in this environment. Details of the various file systems that are supported under z/OS UNIX are explained, along with scenarios on when they would be used.

z/OS UNIX – Interacting with the z/OS UNIX System
This course discusses commonly used interfaces to z/OS UNIX and then concentrates on common tasks and how they are performed within those interfaces. Interfaces covered include: the OMVS shell, the ISPF shell, ISPF's z/OS UNIX Directory List Utility, and batch processing.
z/OS UNIX – Working with z/OS UNIX
This course discusses commonly used interfaces to z/OS UNIX and then concentrates on common tasks and how they are performed within those interfaces. Interfaces covered include: the OMVS shell, the ISPF shell, ISPF's z/OS UNIX Directory List Utility, and batch processing. You will also see how shell scripts are created, and will be introduced to code that is used to perform common z/OS UNIX tasks. Finally, you will discover how file security is structured and managed within z/OS UNIX, and look at some of the methods used to perform backup and recovery of z/OS UNIX files.

z/VSE Basics
The z/VSE Basics course discusses mainframe operating systems and identifies the types of organizations using z/VSE. It provides an overview of the z/VSE infrastructure, describing the personnel likely to interact with it, and provides examples of typical data processing on this system.

z/VSE for Operators
The z/VSE for Operators course looks at z/VSE from an Operations viewpoint, describing how operators access the system and perform startup and shutdown processing. A description of common operator tasks, and the commands used to display, monitor, and resolve problems associated with the z/VSE system, is also provided.

Machine Learning Introduction
This course is designed for those working with organizations looking to implement Machine Learning solutions. It begins by explaining what Machine Learning is, how it works, and how organizations can benefit from it. The course then focuses on IBM's Machine Learning for z/OS solution, describing its features and components.

Zowe Fundamentals
The Zowe Fundamentals course begins by describing the features of Zowe and the major components that comprise the product: Zowe Application Framework, Zowe Command Line Interface (CLI), Zowe Explorer, Zowe Desktop, and API Mediation. Examples show how users interact with these Zowe components and the advantages of using them.
The course then dives into more detail, describing how Zowe and its components are started and run, and any prerequisites that are required. The default capabilities of each component are presented, as well as methods that can be used to create or import additional functionality. The final module in this course looks at the installation and customization possibilities for Zowe server and client components.

Courses marked with "★" are available to clients with Enterprise licenses only.

Note: these elearning courses are online, on-demand and self-paced. They are accessed via the MyInterskill LMS or via your organization's LMS. Interskill licenses access to all the courses for a year. Courses are not available individually.
There is a famous Dilbert cartoon from a few years ago in which Dogbert advises Dilbert's company that they can generate revenue from the information they hold about their customers, but first they have to dehumanise the enemy by calling them data. This pretty accurately summarises one approach to data management that has pervaded the early years of this Information Revolution. However, as the tools and technologies for gathering, analysing, and acting on information become increasingly powerful, we find ourselves facing a tipping point in our love affair with these technologies. This tipping point is all the more pronounced as we consider the impact of data-driven technologies on democratic processes and human rights around the world.

The question of ethics in information management is often conflated with the challenges of managing data privacy, particularly in an increasingly interconnected information landscape. Privacy, however, is merely the entry point for any meaningful discussion of ethical issues in information management. When we look at the ethical issues that arise in the implementation of 'big data', we see that the real privacy issue is not simply the potential loss of privacy and individual agency in an age when we are transparent to the algorithms, but rather the questions that arise when we must trade off privacy against other interests or benefits. If information should be processed to serve mankind (as Recital 4 of the General Data Protection Regulation tells us), then as we dig deeper we find further questions of ethics and ethical conduct that bear on that fundamental principle of ethical information management.
Big data raises ethical questions

In Chapter 3 of our book, Ethical Data and Information Management, we look at examples such as the ethical questions raised when the tools for big data analytics can only run on technology that is affordable in the First World - a problem that has led one data scientist and blogger to explore the potential for what he calls "Cheap Data Science". The ethical question here is simple. Is it fair that the future, to paraphrase the science fiction author William Gibson, "is here, but not yet evenly distributed"? The very people who might benefit most from improved data analytics of issues such as soil erosion or the spread of disease cannot, because of the barrier to entry created by the assumptions about system performance and network capability that developers in affluent Western economies have baked into the design of those same technologies. Is it ethically responsible or sustainable to design software and tools that only work reliably in wealthier developed nations?

We also look at the potential benefits and harms of granular tracking and microtargeting of students at university level, where the prevailing mindset of 'more data is better' has led to the development of technologies that analyse and predict student behaviour, performance, and potential to drop out. However, there is every reason to believe that the headline success stories are simply describing correlation rather than causation. This raises additional ethical issues in a data-driven world where success stories are often not subjected to the rigorous scrutiny they deserve.
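The correlation-versus-causation trap is easy to demonstrate with a toy simulation. In the sketch below (all variable names and numbers are invented for illustration, not drawn from any real EdTech study), student ability drives both course choice and grades, so tech-enabled courses show higher average grades even though the technology has, by construction, no effect at all:

```python
# Toy simulation: a confounder (student ability) produces a grade gap
# between tech-enabled and other courses even though, by construction,
# the technology has zero causal effect on grades.
import math
import random

random.seed(1)

students = []
for _ in range(10_000):
    ability = random.gauss(0, 1)
    # Stronger students are more likely to choose tech-enabled courses...
    uses_tech = random.random() < 1 / (1 + math.exp(-ability))
    # ...but the grade depends on ability alone: no "tech" term at all.
    grade = 60 + 10 * ability + random.gauss(0, 5)
    students.append((uses_tech, grade))

def mean(xs):
    return sum(xs) / len(xs)

tech = [g for t, g in students if t]
no_tech = [g for t, g in students if not t]
print(f"mean grade, tech-enabled courses: {mean(tech):.1f}")
print(f"mean grade, other courses:        {mean(no_tech):.1f}")
# The sizeable gap is pure selection effect: correlation, not causation.
```

A naive reading of the printed averages would credit the technology with several grade points; only a design that controls for ability (for example, randomised assignment) could distinguish the two explanations.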
In the case of the burgeoning EdTech sector, the unanswered question that needs to be addressed is whether investment in technologies that track students, their performance, and their interactions with course work is the cause of higher grades and better performance as claimed, or whether students who would perform better and get higher grades anyway are attracted to courses that have these cutting-edge facilities available. Is the relationship described causation or correlation?

Furthermore, even if there is a causal relationship, there has been limited research on the potential downsides of this type of invasive student tracking. The research that has been done raises concerns about the impact on pedagogic methods in universities, about student privacy, and about chilling effects on independence of thinking and expression among students, as well as on the choices that students or parents might make about course selection or their academic performance.

Ethical concerns of algorithmic bias

The issues of algorithmic bias in artificial intelligence (AI) also give rise to ethical concerns, particularly when the inherent bias in training data is taken into account. While these algorithmic processes are often hailed as beneficial to society through time and cost savings, they often come with a hidden cost. For example, in Chapter 4 of our book we look at the problems with systems like COMPAS, a sentencing support system used in the U.S. court system, which journalists at ProPublica found to be 'remarkably unreliable' in its predictions. White defendants were nearly half as likely as African-American defendants to be flagged as a potential risk of reoffending, and the recommended sentences tended to be longer. The question of how we train AI systems is, in and of itself, an ethical choice.
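The kind of disparity ProPublica reported can be illustrated with a toy audit. The data below is entirely synthetic and the numbers are invented; it only shows how comparing false-positive rates across groups surfaces this class of bias:

```python
# Illustrative only: a toy fairness audit on synthetic data, loosely
# modelled on ProPublica's COMPAS analysis. The numbers are invented
# for demonstration and do not reproduce the real study.

def false_positive_rate(records, group):
    """FPR = share flagged high-risk among people who did NOT reoffend."""
    relevant = [r for r in records if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in relevant if r["flagged_high_risk"]]
    return len(flagged) / len(relevant)

# Synthetic scored population of non-reoffenders in two groups.
records = (
    [{"group": "A", "flagged_high_risk": True,  "reoffended": False}] * 45
  + [{"group": "A", "flagged_high_risk": False, "reoffended": False}] * 55
  + [{"group": "B", "flagged_high_risk": True,  "reoffended": False}] * 23
  + [{"group": "B", "flagged_high_risk": False, "reoffended": False}] * 77
)

fpr_a = false_positive_rate(records, "A")
fpr_b = false_positive_rate(records, "B")
print(f"Group A false-positive rate: {fpr_a:.0%}")  # 45%
print(f"Group B false-positive rate: {fpr_b:.0%}")  # 23%
print(f"Disparity ratio: {fpr_a / fpr_b:.2f}")
```

An audit like this is descriptive, not a fix; deciding which fairness metric should be equalised is itself one of the ethical choices the chapter discusses.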
In many respects, when we are developing AI systems we are acting as parents, imparting values and supporting the development of ways of thinking about issues and inferring facts from the available data. The quality of the models we develop is directly influenced by the quality of the models we are developing from. In the example of COMPAS, a likely root cause of the inherent bias in the system is, for want of a better expression, the inherent bias in the system: historical court rulings and case studies were used to train the AI, and historically, certain ethnic groups have fared better or worse in the US criminal justice system. Similarly, facial recognition machine learning inherits biases from the images used to train it.

Other aspects of algorithmic bias are more subtle in their societal impact. When women tend to be shown lower-paying job adverts and hiring algorithms replicate similar results, this is an undesirable social and societal effect that raises ethical questions of fairness in the workplace. The emergence of Lethal Autonomous Weapons Systems raises the potential for armed conflicts to become more commonplace, as the risk to human life (on the combatant side) is reduced through the deployment of autonomous weapons platforms.

Of course, it is not all doom and gloom. There are also many examples of the use of machine learning, AI, and data analytics to support and enhance the human condition. Companies such as Microsoft and Accenture have demonstrated smartphone-based assistive technologies for the visually impaired which can assist with a range of tasks and narrate information to the user. Developments such as these have the potential to significantly benefit the lives of countless people, and they are impossible without the analytics, machine learning, and AI technologies at the cutting edge of our data-driven world. However, there are open ethical questions to be resolved.
For example, implementing facial recognition into these technologies might mimic the human eye and brain identifying a familiar face in a crowd. But where that processing and matching happens, and who else has access to the biometric data of the people you know and want to recognise, becomes a balancing act. After all, with an assistive technology the matching will not happen inside your skull but in a web-based service hosted either on a device or in a cloud environment.

Integration of ethics into information management

In the Information Revolution we are generating, capturing, and processing an increasingly wide array of data about people, products, locations, events, and the relationships between them. As the potential to impact the lives of people in increasingly subtle but significant ways continues to grow, it is essential that we put an ethical core at the heart of the data-driven world.

This does not, however, mean we need to invent a new discipline or develop a 'new ethics'. Many of the questions we struggle with today have been discussed for thousands of years. The philosopher Martin Heidegger famously extolled the need to recognise technology as a means to an end, not an end in itself. Immanuel Kant equally famously exhorted us to treat people as ends in themselves, not simply as a means to an end. Plato decried the impact of new technologies on the way knowledge is codified and imparted (he was talking about the development of writing, but that was the 'big data' of its day).

What we need to do is incorporate fundamental ethical concepts and principles into the defined data management disciplines we already have. Anything else would simply be reinventing the wheel. The integration of ethics into Data Governance, its influence on Data Quality management, and the need to explicitly recognise the motivation for data processing activities are all key issues that need to be addressed.
Appropriate mechanisms need to be introduced into organisations to ensure an effective alignment between the Ethic of the Individual, the Ethic of the Organisation, and the Ethic of Society. After all, if the Ethic of the Organisation is to consider all customers nothing more than "data", then it will be difficult for any individual to act outside that ethical frame, even when the organisation is at odds with the Ethic of Society and is being lambasted for failing to control fake news and other issues.

Data ethics must be core

Organisations that succeed in addressing this challenge will develop a strong, sustainable competitive advantage, as they will find it easier to attract and retain both staff and customers. Organisations that don't will find themselves increasingly counter to the Ethic of Society, as they can only attract and retain staff at the more extreme negative end of the Ethic of the Individual. With higher staff turnover and a higher chance of being blind to an ethical crisis, such organisations will ultimately fail.

Another consideration is the pace at which legislation lags technology innovation. Legislation and regulation are usually only considered when the Ethic of Society has been so outraged by the actions of an organisation that legal sanctions are imposed or legislation for such sanctions is introduced. By adopting an ethical information management culture, organisations will stay ahead of the requirement for regulation by simply striving to do the right thing in the context of Society, not just the Organisation.
Area Code Emergency Call Routing

Area Code Emergency Call Routing is a routing option for call centers that receive calls from unknown end users, for example individuals who press "0" to reach an operator instead of dialing 911. If an operator sends such a call to Bandwidth and there is no registered 911 address on file, then, unless the caller can provide their location, the call will be routed based on the area code and prefix of the unknown end user's telephone number.

How Bandwidth is Involved with Area Code Routing

Bandwidth uses area code routing to get a caller to the right PSAP, based on the phone number's area code and prefix, in cases when a registered 911 address is not available for a given phone number. This gives call centers a reasonable, simple, and relatively effective way to attempt to determine where to route a call when the address is not known.

What Are the Benefits of Bandwidth's Area Code Routing?

Call centers can benefit from using Bandwidth's area code routing feature because it allows callers to be directed to an operator who is more likely to be within their area in cases when there is no registered address.
This chapter describes the management of IPv4 and IPv6 blocks, networks, IP addresses, and IP address discovery and reconciliation. - IP block—a block is a range of IP space. IP blocks may contain other IP blocks and networks. An IP block must be contained within a configuration or within a parent IP block. - Network—a network is a group of IP addresses that can be routed. Networks may contain only IP addresses. A network must be contained within an IP block. - IP address—the actual IP address leased or assigned to a member of a network. An IP address must be contained within a network. When creating IP address space, you begin by defining IP blocks, then you create networks within those blocks. You can then manage the addresses within the networks.
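The containment rules above (blocks within a configuration or a parent block, networks within blocks, addresses within networks) can be sketched with Python's standard ipaddress module. The classes below are illustrative only and are not Address Manager's actual API:

```python
# A minimal sketch of the containment hierarchy described above,
# using Python's standard ipaddress module. Class names are invented.
import ipaddress

class Block:
    """An IP block: a range of IP space; may contain child blocks and networks."""
    def __init__(self, cidr, parent=None):
        self.range = ipaddress.ip_network(cidr)
        if parent is not None and not self.range.subnet_of(parent.range):
            raise ValueError(f"{cidr} is not contained in parent block {parent.range}")
        self.parent = parent

class Network:
    """A routable network; must be contained within an IP block."""
    def __init__(self, cidr, block):
        self.range = ipaddress.ip_network(cidr)
        if not self.range.subnet_of(block.range):
            raise ValueError(f"{cidr} is not contained in block {block.range}")
        self.block = block

    def assign(self, address):
        """An IP address must be contained within its network."""
        addr = ipaddress.ip_address(address)
        if addr not in self.range:
            raise ValueError(f"{addr} is not inside network {self.range}")
        return addr

# Blocks contain networks (and other blocks); networks contain addresses.
block = Block("10.0.0.0/8")                 # top-level block
child = Block("10.1.0.0/16", parent=block)  # block nested inside a block
net = Network("10.1.2.0/24", block=child)   # network inside a block
net.assign("10.1.2.15")                     # address inside a network
# net.assign("10.2.0.1")                    # would raise ValueError
```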
The New Zealand High Court recently ruled that a former cricketer accused of match-fixing over Twitter can pursue his Twitter libel case. Lalit Modi, a cricketing administrator, took to Twitter to accuse Chris Cairns, a former cricketer, of match-fixing. Cairns in turn filed a lawsuit, claiming that Modi's tweet was libelous.

Background on Defamation – Libel and Slander

In most common law countries, such as the U.S. and Canada, there are two different forms of defamation: libel and slander. Whether an offense constitutes libel or slander depends on the particular medium that conveyed the message. Libel may occur through written, broadcast or otherwise published words. Slander may occur through audible or transitory comments.

Twitter and the Test for Libel

As Twitter is a social platform in which users submit updates, or tweets, of 140 characters or less, the particular type of defamation on the rise is libel. Generally, in order to establish libel, a plaintiff must show the following:

a) the defendant made a false statement specifically about the plaintiff;
b) the statement was published; and
c) the plaintiff suffered actual harm from the statement, such as damage to his, her or its reputation (or negative financial effects).

Defenses are available against a claim of libel. For example, if the defendant can prove that the statements made were indeed accurate, then the defendant was simply making an observation that was true and not defamatory in nature.

Low Number of Twitter Followers Does Not Preclude Libel

In the New Zealand case, Modi argued that the action should be dismissed because so few Twitter users were likely to have seen the message. However, the High Court rejected Modi's argument, stating that the number of people who saw the message was only one of a number of factors to consider in a defamation case.
Instead, Justice Tugendhat found that "a real threat in a case such as this was that the statements at the centre of the libel claims might be more widely disseminated, and that the measure of the damage to the allegedly libeled person is about more than just the number of people who saw the original post."

Twitter-Based Libel in the News

This latest lawsuit is just one of an increasing number of cases stemming from the use of Twitter. This is no surprise, given the growing popularity of social networking tools such as Twitter. The first Twitter-related libel action seems to have occurred in March of 2009, when singer Courtney Love was sued for libel by her former fashion designer, Dawn Simorangkir. After a dispute involving money, Love took to her Twitter account to accuse Simorangkir of being a liar and a thief. As a result of those tweets, Simorangkir sued the singer for libel, claiming that Love carried out "an obsessive and delusional crusade of malicious libel against her on Twitter, adding insult on MySpace and other websites."

Twitter-related litigation extends beyond the scope of celebrity. Pizza Kitchen, a pizza restaurant in the U.S., was sued for libel in September 2009 for posting various tweets accusing Low and Tritt, a marketing firm, of being crooks and thieves. According to Low and Tritt's claim, Pizza Kitchen tweeted: "Don't EVER use Lowandtritt [sic] marketing firm." and "Crooks — stolen email list and have tried to pressure me by threat of lawsuit to sign a license agreement to use their marketing materials." The next day, Pizza Kitchen added the following posts: "Lowentritt marketing firm has done it again" and "Can you believe that they have not only stolen my email list but have now hacked Pizza Kitchen's facebook page taking it offline?" As a result of the tweets, Low and Tritt filed a US$2 million libel suit against Pizza Kitchen.
In yet another case, in July of 2009, Horizon Group Management, a Chicago property firm, filed a libel suit against a tenant who complained about her "moldy apartment" on Twitter. The tenant tweeted: "Who said sleeping in a moldy apartment was bad for you? Horizon realty thinks it's okay." As a result of that tweet, Horizon Group Management filed a libel suit, claiming that the tenant "maliciously and wrongfully published the false and defamatory tweet." The case was recently dismissed, as the judge felt the tweet was too vague to meet the legal standard for libel. While the tenant mentioned "Horizon Realty" in her tweet, she never specifically referred to the Horizon Realty located in Chicago or Illinois. As the Internet is ubiquitous in nature and Twitter is international in scope, it is possible that her reference to Horizon Realty could have applied to any company using the name "Horizon." Accordingly, Horizon Group Management was not capable of proving that a false statement was specifically made against it.

Classic Defamation Framework Encompasses New Technology

Considering that Twitter and social networking in general are relatively novel services, there is a lack of jurisprudence involving these new and emergent mediums available to the courts. However, the courts seem well equipped to deal with these issues by treating libel actions involving Twitter as general libel suits, requiring all of the elements of a libel action to be proven in order for the case to succeed.

Caveat Actor: Tweet With Caution

As the cases described above clearly demonstrate, Twitter provides an easy outlet for individuals to vent about their daily travails and perceived insults. However, users should exercise caution. They need to be cognizant that despite the ease with which their need for emotional catharsis can be satiated, there is an audience, and there is an ever-looming possibility of causing someone serious harm.
Ultimately, even though a tweeter may be in a private home or office, tweeting is a public act and should be treated with all of the deference that other public acts are accorded. Once a tweet has been posted, it cannot be recalled, and serious and deleterious ramifications may result for an individual or a company.

Javad Heydary, a columnist for the E-Commerce Times, is chairman and managing director of Heydary Hamilton. His business law practice focuses on commercial transactions, e-commerce and franchising law. Heydary is also managing editor of Laws of .Com, a biweekly publication covering legal developments in e-commerce.
How Did Snowden Breach NSA Systems?

Security Firm Offers Its Take on the Insider Job

Figuring out how Edward Snowden breached National Security Agency computers is sort of like solving a puzzle. Take public information, such as the congressional testimony of the NSA director and Snowden's own words, and match it with an understanding of how organizations get hacked, and the pieces seem to fall into place.

Security software maker Venafi says it used that approach to conclude that Snowden fabricated secure shell keys and digital certificates to gain access to documents on NSA computers he had no right to access. Secure shell, or SSH, is a cryptographic network protocol used to secure a channel linking two computers over an insecure network.

Jeff Hudson, Venafi's CEO, challenges the NSA and Snowden to prove him wrong. An NSA spokeswoman declined to comment on Venafi's analysis, referring comments to the Department of Justice, which is conducting the investigation into the Snowden leaks. The DoJ also declined to comment.

Venafi isn't the first organization to offer its take on how Snowden breached NSA computers, leaking stolen data to reveal secrets about NSA surveillance programs (see NSA E-Spying: Bad Governance). The news service Reuters reports that Snowden used login credentials and passwords provided unwittingly by colleagues at a spy base in Hawaii to access some of the classified material he leaked to the media. Reuters, citing a source, says Snowden may have persuaded about two dozen fellow workers at the NSA regional operations center in Hawaii to give him their logins and passwords by telling them they were needed for him to do his job as a computer systems administrator.

Exploiting Systems Administrator's Privileges

But Venafi went further.
Employing Lockheed Martin's Kill Chain model - which identifies patterns that link individual intrusions into broader campaigns - Venafi in its analysis surmises that Snowden employed existing systems administrator's security privileges to determine what information was available and where it was stored. Then, he gained unauthorized access to other administrative SSH keys and made it look as if he could be trusted and gain access to files and systems he wasn't authorized to see. "This is relatively easy to do if the organization has not protected and secured these technologies, the capabilities," Hudson says. "The NSA hadn't, and most global 2000 companies haven't."

NSA Director Gen. Keith Alexander told Congress that Snowden was able to fabricate digital keys because of the agency's failure to detect anomalies, according to Venafi's report. "Venafi's analysis of statements from Gen. Alexander in congressional testimony gives credence to the theory that Snowden generated credentials," says Richard Stiennon, a security analyst and author of the book Surviving Cyberwar.

Hudson, in an interview with Information Security Media Group, says Snowden exploited security technologies to move from one computer to another. "These systems gave him greater and greater privilege, and greater and greater access," Hudson says. "What he did was use the classic attack method: he surveilled the situation, he targeted the data he wanted; he got onto those systems; he exfiltrated the data."

With massive amounts of data, Snowden needed to transfer information among systems undetected, and he apparently did that by encrypting the data he pilfered, according to the analysis. The Venafi analysis quotes Snowden as saying: "Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on." By encrypting the data, Hudson says, Snowden was able to keep the transfer of top-secret data hidden from the NSA.
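The fabricated-SSH-key vector Venafi describes is one reason defenders periodically audit authorized_keys files against a known-good inventory. A minimal, hypothetical sketch follows; the approved-fingerprint list and file handling are invented for illustration and are not Venafi's or the NSA's tooling:

```python
# Hypothetical audit: flag SSH public keys present on a host that are
# not in an approved inventory. The approved fingerprint below is an
# invented placeholder, not a real key.
import base64
import hashlib
from pathlib import Path

APPROVED_FINGERPRINTS = {
    "SHA256:2l2PCsfe0jC/1UGJb4GcfB8uNm6r0w1gcTH6TjK0l0o",  # invented entry
}

def fingerprint(b64_key: str) -> str:
    """OpenSSH-style SHA256 fingerprint of a base64-encoded public key blob."""
    digest = hashlib.sha256(base64.b64decode(b64_key)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def audit_authorized_keys(path: Path) -> list[str]:
    """Return fingerprints found in an authorized_keys file but not approved."""
    unapproved = []
    for line in path.read_text().splitlines():
        parts = line.split()
        # Simplification: only matches "ssh-*" key types (e.g. ssh-ed25519, ssh-rsa).
        if len(parts) >= 2 and parts[0].startswith("ssh-"):
            fp = fingerprint(parts[1])
            if fp not in APPROVED_FINGERPRINTS:
                unapproved.append(fp)
    return unapproved
```

Detecting an unapproved key is exactly the kind of anomaly Gen. Alexander conceded the agency failed to catch; the audit itself is trivial once an inventory of trusted keys exists.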
"If he had not used encryption," he contends, "he absolutely would have been caught in his tracks right way." Hudson says Snowden also altered systems' log files to camouflage his malicious actions. Of course, being in the business of selling software and services to secure cryptographic keys and digital certificates provides Venafi with a financial incentive to warn other organizations about the insider threat posed by the likes of Snowden. Still, cybersecurity concerns Venafi presents are worthy of consideration. Is Venafi objective in its analysis? You can be the judge of that. Let us know what you think by commenting in the space below.
An Eclipse project consists of a directory, project settings files, and the contents of that directory. This means that only one Eclipse project can exist in any one directory. Eclipse projects build their sources to just one form of output, such as an executable (an .exe) or a dynamically loaded library (a .dll), which you control using the project's build configurations. Projects have a type (such as JVM COBOL) and you create them using a wizard for the appropriate project type. The type of the project determines how the project node is decorated (what icon is used for it) and what view it is opened in inside the main Eclipse window - for example, the COBOL Explorer view for COBOL projects.
Question 221 of 240

Which two statements about header attacks are true? (Choose two.)

A. An attacker can use IPv6 Next header attacks to steal user data and launch phishing attacks.
B. An attacker can leverage an HTTP response header to write malicious cookies.
C. An attacker can use vulnerabilities in the IPv6 routing header to launch attacks at the application layer.
D. An attacker can execute a spoofing attack by populating the RH0 routing header subtype with multiple destination addresses.
E. An attacker can use HTTP header attacks to launch a DoS attack.
F. An attacker can leverage an HTTP response header to inject malicious code into an application.
Published Friday, Sep 04, 2020, by Adnan Kayyali

A COVID-19 research collaboration between Virginia Tech and the University of Georgia is investigating the tendency of the virus to bind to carbohydrate-based polymers, such as heparin. If successful, the research could be used to develop virus-trapping gels and surfaces for use in various protective and preventative health products, and even diagnostics. In an article by Virginia Tech Daily, the researchers explain their hypothesis and their hopes for future development, as well as the tools they will use to achieve them.

"The virus passes a large number of carbohydrate-based molecules on its way into the cells in our body," said Maren Roman, Associate Professor of Sustainable Biomaterials in Virginia Tech's College of Natural Resources and Environment. "If we can determine which carbohydrates or carbohydrate chains the virus binds to, we can develop materials that work like a fly trap and capture virus particles before they get into our bodies."

The anticoagulant heparin is what is known as a blood thinner, and is used widely to treat blood clotting disorders. Notably, patients suffering from COVID-19 who were given heparin showed a much lower risk of being killed by the virus. The universities' COVID-19 research has been met with optimism by the National Science Foundation, which has awarded the teams from Virginia Tech and the University of Georgia a $200,000 RAPID COVID-19 grant.

"We will use cutting-edge computational tools to study which carbohydrate molecules bind most strongly to the virus," said Robert Woods, Professor of Biochemistry and Molecular Biology, and Chemistry, at the University of Georgia. "This work is a natural extension of our prior work on the virus, which has given us detailed computer models of one of its surface proteins, namely the Spike protein.
This protein is responsible for the virus's ability to enter cells and its tendency to bind to carbohydrates."

"Our ability to successfully stop this pandemic depends on researchers from different fields and even institutions joining forces and collaborating," Roman said. "Only together will we figure this virus out."

With COVID-19 research well underway, one can only hope for a breakthrough in preventative and lifesaving treatments.
Summer brings us the joys of long days, warm weather, and perfect opportunities to go to the beach. But it also brings us oppressive heat, humidity, and thunderstorms. Lightning strikes occur more frequently in the spring and summer than in any other season, and electrical storms can cause power surges and blackouts.

When your power goes out, you usually expect everything to turn back on again as if nothing had happened. (As any readers affected by the Northeast Blackout of 2003 will recall, though, it may take a day or two.) But for this recovery client, everything came back on but their hard drive. The client removed the drive from their PC and attempted to use data recovery software, but could not recover anything. The client came to us for our power outage data recovery services.

Data loss after a power outage isn't extremely common, but there are many ways it can happen. Most people know the pain of suddenly losing a document or project before they have had a chance to save their most recent changes when a program crashes or the power goes out. This is logical data loss, meaning it doesn't involve any physical damage to the storage device. Many programs have automatic recovery capabilities that can help to ameliorate the effects of data loss, with varying degrees of success. Some filesystems, such as Linux Ext4 and Macintosh HFS+, have journaling capabilities, which can prevent data corruption in the event of a sudden shutdown.

How Power Outages Can Cause Physical Failure

Data storage devices, especially hard disk drives, don't take kindly to sudden shutdowns. If a storage device loses power without warning, the boot sector can become corrupted. (This is why you're always supposed to safely eject an external drive or USB thumb drive before unplugging it.) For hard disk drives, there is an added danger. When you normally shut down a computer or eject an external drive, the computer sends signals to the hard drive to prepare it.
If you cut the power without warning, the hard drive's read/write heads may crash into the platters and cause the drive to fail. In this power outage data recovery case, the client's hard drive would not boot up. While the client heard no funny clicking noises coming from the drive, that did not rule out the possibility that their Western Digital hard drive had physically failed as a result of the blackout.

Power Outage Data Recovery

Whenever a hard drive loses power without warning, there is a risk of the drive becoming damaged. Upon evaluating the client's hard drive, we determined that this had happened to the drive. The read/write heads had not failed completely, but were in degraded condition. Our engineers refer to these kinds of heads as "slow" heads, because their ability to read and write data quickly has greatly diminished. Slow heads are often on the verge of failure.

Because slow read/write heads cannot perform their duties at a hard drive's natural pace, the drive will fail to perform even the most basic of tasks in a normal setting. In many cases, these heads must be replaced in our cleanroom. But in some situations, our engineers can talk to a drive with slow heads just by speaking to it at its own pace. This is something our own specialized data recovery tools allow us to do.

Power Outage Data Recovery Case Study: Western Digital AAKX Drive

Model: Western Digital WD2500AAKX-00U6AA0
Drive Capacity: 250 GB
Operating System: Windows
Situation: Drive failed to boot after power outage, recovery software failed
Type of Data Recovered: Quickbooks, documents, Excel spreadsheets
Binary Read: 46.5%
Gillware Data Recovery Case Rating: 9

By skillfully altering the operation parameters of HOMBRE, our data recovery engineers could slow down the read time enough to let the heads read data from the drive's platters.
The read/write heads did not perform perfectly, but after reading 46.5% of the drive's sectors, our engineers had recovered 99.9% of the user's files. The vast majority of the user's critical Quickbooks files, documents, and spreadsheets were completely recovered. Our engineers rated this power outage data recovery case a 9 on our ten-point case rating scale.

Why Gillware Does Not Recommend Data Recovery Software

We generally do not recommend that people with hard drive failures attempt to use data recovery software on their own. There are plenty of software tools out there, many of which are cheap (or free) and easy to use. But we've seen far too many recovery cases come to us that would have gone much more smoothly had the client not tried to use software on their own to recover their data. Like most DIY data recovery methods, these tools can do more harm than good.

Software data recovery tools cannot help at all if a hard drive has suffered a mechanical failure. File recovery software cannot fix a failed or failing read/write head. It cannot unjam a stuck spindle motor. It cannot undo platter scratches. Running a physically failed hard drive in order to use recovery software can, and often will, cause even further degradation, resulting in data loss which may be unrecoverable.

Data recovery software can only recover data that has been lost due to logical failure, such as file deletion, accidental reformatting, or filesystem corruption. If the logical issue is not very complex, recovery software will often do its job just fine. If it is a complex logical issue, though, these tools will often fail. And if the software is installed and run on the same drive that has lost data, the mere act of installing and running the software can irreversibly overwrite critical data.

At Gillware Data Recovery, our engineers use HOMBRE, an intelligent, fault-tolerant recovery hardware and software package of our own design, to assist in our data recovery efforts.
With these tools, we can move at a hard drive’s own pace, even if its components are in rough shape, and recover our clients’ data from complicated logical data loss situations.
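The idea of reading a drive "at its own pace" can be sketched in code. The snippet below is a conceptual illustration only, not Gillware's HOMBRE platform: it images a drive sector by sector, pausing and retrying when a read fails, and skipping sectors that never succeed rather than stalling the whole pass. The `read_sector` callback and its IOError behavior are assumptions made for the sketch.

```python
import time

def image_drive(read_sector, total_sectors, pace_s=0.0, retries=3):
    """Image a drive sector by sector, tolerating a drive that must be
    read slowly.

    read_sector(n) is assumed to return the sector's bytes or raise
    IOError on a failed read. pace_s inserts a delay before retrying,
    so degraded heads get time to keep up; sectors that still fail
    after `retries` attempts are skipped instead of stalling the pass.
    """
    recovered, skipped = {}, []
    for sector in range(total_sectors):
        for _ in range(retries):
            try:
                recovered[sector] = read_sector(sector)
                break
            except IOError:
                time.sleep(pace_s)  # back off, then try this sector again
        else:
            skipped.append(sector)  # gave up on this sector
    return recovered, skipped
```

A real imaging tool would also log read timings per sector and revisit skipped regions later; this sketch only shows the retry-and-skip structure.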
There are three levels of the internet: the surface web, the deep web, and the dark web. The content that is accessible online to everyone can be found on the surface web. The content you cannot usually find through regular search engines like Google is found on the deep web, and it cannot be reached through ordinary browsing alone. The surface web covers less than 1 percent of the internet, while the deep web covers almost 7.5 petabytes, which makes it the deepest level of the internet.

Misconceptions about the Deep Web
The deep web is often mistaken for the dark web, but they are totally different from each other. The important thing to note is that the deep web is legal, and the conception that it is mischievous is incorrect. The deep web does not have anything to do with illegal business; rather, it is important for the security and protection of our private data over the internet. Imagine if all your data were accessible to anyone without any authentication or privacy. The deep web contains all this sensitive data, and only users who have the right passwords or codes can access it. When you access your Gmail account, bank account, or Facebook account, you are using the deep web.

Do VPNs Matter on the Deep Web?
It is not necessary to use a VPN while browsing the deep web, but it can be useful for increasing the security of your personal information and actions on the web. To protect your personal information from data theft, cybercrime, and government spying agencies, it is important to keep your data safe. By using a VPN, your real IP address is not visible, which makes your activities on the web anonymous. VPN-protected surfing maintains your security over the internet, making it hard for your Internet Service Provider to track your actions on the web. Tor (The Onion Router), which is an anonymity network rather than a VPN, is the tool most commonly used alongside a VPN on the deep web and dark web.

How to Browse the Deep Web Safely?
We often browse the deep web for different purposes. If you have logged in to your personal student portal to check your grades, it is wise to make sure that your personal information is safe and secure. For that, you must have software installed which ensures the protection and security of your data. VPN and Tor are both used for protecting and encrypting users' data on the web. While using Tor and other browsers, there are some important things you should keep in mind.
- Tor focuses on users' anonymity on the web, so it can be safe for sharing confidential data with your family and friends. Reporting anything would also be anonymous.
- While using Tor, it is important to keep all of Tor's applications, and your device, updated to get the best results.
- While using Tor, avoid entering your personal or professional email ID on websites, as it can reveal your real identity.

What is Tor?
Tor is a publicly accessible browser that is used to make surfing on the deep web anonymous. The software is user-friendly and hides users' IP addresses. Tor is not fully effective if used alone, so it is often paired with a VPN.

Use a VPN for More Protection
The IP address provided by Tor can still be tracked, but connecting to a VPN along with Tor hides the IP address given by Tor. This makes it almost impossible for anyone to track your actions, even if they find out your Tor IP address. There are several advantages to setting up a VPN this way:
- Only the Tor IP address is visible to the VPN provider. Your real IP address stays unrevealed.
- All the data is passed through Tor.
- The data is encrypted before entering and after exiting Tor, so it is protected from any Tor volunteers who might tamper with it.
- The VPN can show a fake location and hide your real location. It is possible to access websites that block Tor exit nodes.
Tor over VPN
The main objective of Tor is to make the user anonymous on the web, while a VPN mainly focuses on maintaining the privacy of the user's data. Connecting them together decreases the chances of leaving behind any traces of the user. If you install a VPN and then add the Tor browser, this is known as using Tor over VPN. All the data from the client passes through the VPN server and then enters the Tor network. The Tor network passes your data through several nodes and layers while keeping your identity anonymous. At each node, one layer of the encrypted data is decrypted before the data reaches its destination. Your Internet Service Provider can only see the encrypted data and won't know that you are using Tor. However, Tor over VPN does not guarantee the protection of your data. Tor's nodes are run by volunteers, so there is no guarantee that they will strictly follow the rules. The data is decrypted at the exit node before reaching its destination, so there is a chance the data can be stolen there or destructively altered. Sometimes exit nodes are blocked by certain sites that are suspicious of Tor. In such cases, nothing can be done by Tor over VPN.

VPN over Tor
There is another method, called VPN over Tor. The data first passes through the Tor network and then through a VPN. This hides the VPN IP address and also protects your data from the exit nodes in Tor. The disadvantage is that your Internet Service Provider will know that you are using Tor. This makes it a less used method. We can conclude that a VPN does matter on the deep web. Without a VPN, your real identity would be exposed and all your actions could be tracked online. This can lead to unfavorable situations for you. Therefore, it is important to keep your data and online activities safe on the deep web so that your data cannot be used against you. Both methods, VPN over Tor and Tor over VPN, can be equally beneficial.
Either is better than using a VPN alone, which is not secure enough on its own.
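The "several nodes and layers" described above can be illustrated with a toy model. The XOR cipher below is only a stand-in for real cryptography (Tor actually uses layered AES inside its circuits); the point is to show how the client wraps one layer per relay and each relay peels exactly one layer, so only the exit node sees the plaintext.

```python
def xor(data: bytes, key: bytes) -> bytes:
    # Toy "encryption": XOR with a repeating key. Not secure;
    # it only illustrates the layering, not real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(message: bytes, node_keys):
    # The client wraps the message once per relay, innermost layer last,
    # so the first relay's layer ends up on the outside.
    for key in reversed(node_keys):
        message = xor(message, key)
    return message

def route(onion: bytes, node_keys):
    # Each relay peels exactly one layer; intermediate relays still
    # see ciphertext, and only the exit node recovers the plaintext.
    seen_by_nodes = []
    for key in node_keys:
        onion = xor(onion, key)
        seen_by_nodes.append(onion)
    return onion, seen_by_nodes
```

In the real network each relay only knows its predecessor and successor, which is what keeps any single node from linking the sender to the destination.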
If you pay attention to the latest cybersecurity news, you may have heard that something called cryptojacking is quickly taking the hacker world by storm as the newest cyber threat, possibly becoming even more popular than ransomware. So what on earth is cryptojacking? Cryptojacking is a method of hijacking computers to mine cryptocurrency without the victim's knowledge or permission. If you are not familiar with the world of cryptocurrencies, the act of mining simply means performing complex calculations to add transactions to the blockchain (another term?! The blockchain is the distributed ledger of recorded transactions for the cryptocurrency). For instance, the rules of the popular Bitcoin cryptocurrency say that there will only ever be 21 million Bitcoins in existence, but not all of them have been created yet. Bitcoin mining essentially is creating new Bitcoins and bringing them to light. But back to cryptojacking: hackers are essentially stealing the processing power of victims' computers to run the complex calculations and be awarded new cryptocurrency. They do this by infecting website plugins to steal your processing power while you visit legitimate websites, by intercepting you while you are connected to the Wi-Fi at your coffee shop, and by infecting you with malware that steals your processing power all the time. So why should I care about cryptocurrency mining malware? More often than not, you may not even realize that you have been infected with cryptocurrency mining malware. You may experience a slow-down of your computer or lag while using the internet. The same goes for your mobile devices, as cryptojacking has started exploiting the processing power of Android phones through malicious websites. There was even a nasty version of Android cryptojacking malware called Loapi that could cause a phone to use so much processing power that the phone would physically melt.
Other than melting your phone, there are other cases where cryptocurrency mining malware could cause real havoc. In a race to find more processing power, hackers have looked to utilities and successfully infiltrated a water utility in the United Kingdom to mine cryptocurrency. Had the cryptocurrency mining operation consumed enough processing power, it could have caused system failures and truly impacted the operations of the utility. Perhaps even more stunning, a handful of scientists in Russia were arrested when they attempted to connect a supercomputer at a nuclear facility to the internet so they could use the computer's processing power to mine cryptocurrency.

How to prevent cryptojacking?
There are a couple of steps that you can take to prevent cryptocurrency malware infections.
- Install an anti-cryptomining browser extension like NoCoin or MinerBlock
- Use a pop-up/ad blocker (some even have cryptomining blocking built in)
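Extensions like NoCoin and MinerBlock work, at their core, by matching requested script URLs against a blocklist of known mining domains. A minimal sketch of that idea follows; the blocklist entries here are made-up placeholders, not the extensions' real lists.

```python
from urllib.parse import urlparse

# Hypothetical blocklist entries. Real extensions ship regularly
# updated lists of known cryptomining script domains.
MINER_BLOCKLIST = {"coin-hive.example", "cryptoloot.example", "miner-cdn.example"}

def is_blocked(url: str) -> bool:
    """Return True if a request should be blocked as a likely miner."""
    host = urlparse(url).hostname or ""
    # Block the listed domain itself and any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in MINER_BLOCKLIST)
```

A browser extension would run this check on every outgoing script request and cancel the ones that match, which is why blocklist freshness matters more than the matching logic itself.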
Cybersecurity Tactics to Prevent Ransomware
Ransomware is a type of malware that prevents or limits users from accessing their system, either by locking the system's screen or by locking the users' files, unless a ransom is paid. More modern ransomware families, collectively categorized as crypto-ransomware, encrypt certain file types on infected systems and force users to pay the ransom through certain online payment methods to get a decryption key. When ransomware created a niche for itself in the cyberthreat landscape, it was categorized as scareware, similar to FAKEAVs that used fake scan results to frighten victims into paying for bogus antivirus software. Slowly, the more successful ransomware families progressed into something more threatening as they began to encrypt files. But even these tactics lost their edge over time. Organizations adapted. People realized that simple security measures like regular backup practices can significantly reduce a ransomware attack's damage. In addition, ransomware actors took a "shotgun" approach (indiscriminately distributing their malware) that allowed antivirus providers to develop solutions to defend against them. In a sense, ransomware's notoriety diminished from a hazard to a nuisance. Unfortunately, this was only temporary. 2020 proved conducive to the development of new ransomware aimed at a narrowing range of targets. Overall, there was an increase in new ransomware families, from 95 in 2019 to 127 in 2020, despite the decreased detection of ransomware-related components.

Cybersecurity Recommendations Against Ransomware
Like with any cyberthreat, prevention is still key. Organizations should be knowledgeable about the current techniques and components being used by ransomware operators. Taking note of the four stages of a ransomware attack can help in identifying signs of intrusion, such as phishing emails, and possible openings, like unpatched vulnerabilities. Suggested cybersecurity recommendations include:
- Regularly back up files.
Despite current developments, backups still provide a significant safeguard against ransomware encryption and other cyberthreats.
- Limit access to shared/network drives and turn off file sharing. This minimizes the chances of spreading a ransomware infection to other devices.
- Employ secure authentication strategies. Enable options such as multi-factor authentication to deny threat actors access to accounts.
Read more about this topic in our cyber-readiness guide.
This article was written by Magno Logan, Erika Mendoza, Ryan Maglaque, and Nikko Tamaña
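The first recommendation above, regular backups, can be as simple as a script that copies files into a new timestamped folder on each run, so a later backup of already-encrypted files can never overwrite an earlier clean copy. The paths and naming scheme below are placeholders, not a recommendation for any specific product:

```python
import shutil
import time
from pathlib import Path

def backup(src_dir: str, dest_dir: str) -> Path:
    """Copy src_dir into a new timestamped folder under dest_dir.

    Keeping each run in its own folder means a backup taken after a
    ransomware infection cannot silently replace an earlier clean copy.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = Path(dest_dir) / f"backup-{stamp}"
    shutil.copytree(src_dir, target)  # fails if target already exists
    return target
```

In practice at least one backup destination should be offline or otherwise unreachable from the infected machine, since ransomware routinely encrypts attached backup drives too.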
Almost every day it seems there is a new revelation about how our personal data is being mishandled by those to whom we entrust it. Recently, we learned how Facebook had given device makers access to the private data of millions of users, and then "failed to police" how its partners used that data. That follows news of the combination of bugs in several of its services that left the accounts of more than 30 million users exposed to hackers for over a year. Nor is it just Facebook. We also recently learned that a flaw at Google+ had left user data exposed since 2015. This followed the leaks at Equifax, Uber, Target, Yahoo and the rest. The list of serious data breaches appears endless at this point. While these are major betrayals of our trust, it would be a mistake to place the blame solely at the doors of these companies. The root of the problem lies rather with the outdated technologies we use to manage personal data and identity online. The good news is that there is an alternative in the works. It's known as "self-sovereign identity" and it's a big deal.

A great leap forward for identity
Really a collection of technologies and methods, self-sovereign identity is the next step in the evolution of digital identity and personal data management. It builds on advances in hardware and software that make it possible for users to control their own online identities and personal data, removing the need to trust someone else to do it for them. These advances include new generation smartphones powerful enough to serve as personal identity platforms; new techniques in cryptography, in particular the invention of the blockchain, that make user-controlled identity and data feasible; and new standards, including decentralized identifiers (DIDs), to help make this work at scale.
With SSI we can build the digital equivalents of the physical proofs of identity, like our driver’s licenses or passports, that we keep in our wallets or desk drawers today. The main differences are: a) they are completely owned by the individual, residing in digital "wallets" or folders on our phone or PC, b) they are far harder to forge than their analogue counterparts, and c) they are more flexible and easier to use. In the self-sovereign identity paradigm, when it comes time to authenticate ourselves or provide data online, instead of creating a new account or asking Facebook Connect to log us in, we simply present our appropriate digital credentials. The site can easily check if these credentials are valid, and we can then grant access to our data. This has a number of benefits. With self-sovereign identity we can make our personal data more secure. In future we won't have to entrust the keys to our digital lives to a large corporation like Facebook, Google or anyone else. We can also better protect our privacy by being able to grant and revoke access to our data at will, and having far more choice as to how much information we disclose for a given transaction. For example, to buy wine online we only have to prove to the merchant that we are old enough to purchase alcohol, not what our actual birthday is; self-sovereign identity would make this possible to do. By ensuring businesses are only able to access information relevant to a given transaction, self-sovereign identity could also mitigate the potential damage caused by a data breach, or by the behaviour of an aggressive actor. As well as being safer and more private, this technology should also make using the Internet more convenient. Self-sovereign identity means we won’t have to set up a new account and password for every new site we visit, and may one day lead to a “passwordless Internet”. It also makes it much easier to update our data across all our sites. 
Gone will be the days of having to reenter credit card or address information on all our websites every time we open a new account or move house: we can simply make a one-time update of our digital ID, and the job is done. With self-sovereign identity it will also be easy to make verified "ID cards" for all sorts of things, both those we are used to and those we are not. From proof of address, age or citizenship to membership in a local club, almost anyone will be able to issue a bona fide credential that we could then use at will. This could open a whole new world of applications for digital identities and personal data, with intriguing possibilities.

On the brink
While self-sovereign identity is still in its early days, it is more than just theory. Working solutions already exist, such as the city of Zug in Switzerland, which offered Ethereum blockchain based digital identities as an option to residents, or the Swiss national railway, which recently piloted the use of self-sovereign identity in tracking the credentials of its contractors and employees doing work on the tracks. That said, most people downloading a digital identity wallet today would find its uses limited. This is set to change. We think self-sovereign identity is a technology on the brink of mass adoption, and represents one of the most important developments in the ongoing efforts to improve the World Wide Web. For anyone who has had their data exposed online, which likely means almost everyone at this point, this should come as good news.
Tom Lyons, Executive Director, ConsenSys Research and Advisory, Switzerland
Rouven Heck, Co-Founder and Project Lead, Digital Identity Platform (uPort), ConsenSys
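The wine-purchase example above (proving you are over 18 without revealing your birthday) can be sketched as a signed minimal claim. Real self-sovereign identity systems use public-key signatures and DIDs so the verifier never holds the issuer's secret; the HMAC below is a deliberate simplification that only shows the shape of issue-and-verify with selective disclosure:

```python
import hashlib
import hmac
import json

def issue_credential(issuer_key: bytes, claims: dict) -> dict:
    """Issuer signs a minimal claim set, e.g. {"over_18": True},
    so the holder can prove age without disclosing a birthdate.
    (Toy scheme: real SSI uses asymmetric signatures, not HMAC.)"""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(issuer_key: bytes, cred: dict) -> bool:
    """Recompute the signature over the presented claims and compare."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])
```

The key property is that the credential carries only the fact the merchant needs; any tampering with the claims invalidates the signature.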
Using the Palette BPEL Activities
An activity is a step in a BPEL process and performs the process logic. The following are brief descriptions of the activities used in the Process Designer. Click the specific activity for more detailed information, such as its associated properties and instructions on how to use it. In order for the debug process to work effectively, you must assign a unique name to each BPEL activity in your process. For more detailed information, see the BPEL specification.

Basic activities:
- Empty: An activity that does nothing when it executes.
- Invoke: Directs a Web service to perform an operation.
- Receive: Receives the process input and initiates the process execution. The activity is complete when the message arrives.
- Reply: Sends a message in response to a message that was received through a Receive activity. The Reply activity matches a Receive. Use a Reply to send a synchronous response to a Receive.
- Assign: Assigns values to variables. It can copy data from one variable to another, as well as copy, construct and insert new data using expressions or literal values.

These activities are containers which can enclose subsets of other basic activities:
- If: Performs conditional execution based on whether or not one or more conditions are met.
- Pick: Waits for one of several possible messages to arrive or for a time-out to occur. When one of these triggers occurs, the associated child activity is performed. When the child activity completes, the Pick activity completes.
- While: Executes an activity repeatedly until its condition evaluates to false. The While condition is evaluated before the execution takes place, in contrast to Repeat Until, where the condition is evaluated after the execution takes place.
- For Each: Contains a Scope activity and executes it for a specified count. You determine the number of iterations that will execute by defining expressions for a start and final value. These values are inclusive, so a start of one and a final of 10 causes the enclosed scope to execute 10 times. The execution iterations can occur either in parallel or in sequence.
- Repeat Until: Executes an activity repeatedly until its condition evaluates to true. The Repeat Until condition is evaluated after the execution takes place, in contrast to While, where the condition is evaluated before the execution takes place.
- Wait: Tells the business process to wait for a specified amount of time or until a deadline is reached. Exactly one of the expiration criteria must be specified.
- Sequence: Arranges and executes a collection of activities sequentially in an ordered list. In a Sequence, the first activity in the list executes, and when it finishes, the second activity begins.
- Scope: Provides a container where you can enclose a subset of activities, creating the structure and conditions to have them execute as a manageable unit. A Scope activity can contain fault, event, and compensation handling for the activities nested within it, and can also have a set of defined variables and a correlation set. A unit of work enclosed and executed within a Scope can be reversed. For example, if a customer cancels a paid order, the money must be returned and the order cancelled without affecting other orders.
- Flow: Executes all activities concurrently. This means that you can define two or more activities to start at the same time. The activities start when the Flow starts, and the Flow finishes when the activities contained within it complete. You can use links within a Flow to define dependencies between activities and control the order in which they are executed.
- Exit: Stops an executable business process immediately.
- Throw: Generates a fault from inside the business process. Specifies a standard or custom fault. To learn more about error handling, see How do I Handle Faults?
- Rethrow: Passes to the parent Scope the fault that was originally caught by the immediately enclosing fault handler. You can only use the Rethrow activity within a fault handler's catch and catch all elements.
- Compensate: Starts compensation on all inner Scopes that have already completed successfully, in default order. Only use this activity from within a fault handler, another compensation handler, or a termination handler.
- CompensateScope: Starts compensation on a specified inner Scope that has already completed successfully. Only use this activity from within a fault handler, another compensation handler, or a termination handler.

These activities are not standard BPEL activities, but extensions that provide additional functionality to your process:
- RESTful: Provides the ability to invoke RESTful services within your process.
- Email: Provides the ability to send and receive email notifications within your process.
- Log: Provides the ability to log information, errors, and warnings within your process.
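The distinction between While and Repeat Until is easy to get wrong, so here is the same semantics sketched in Python (a hypothetical illustration, not Process Designer code): While may run its body zero times because the condition is checked first, whereas Repeat Until always runs the body at least once because the condition is checked afterward.

```python
def while_loop(keep_going, body):
    """BPEL While: condition checked BEFORE each iteration,
    so the body may execute zero times. Returns the run count."""
    runs = 0
    while keep_going(runs):
        body(runs)
        runs += 1
    return runs

def repeat_until(done, body):
    """BPEL Repeat Until: condition checked AFTER each iteration,
    so the body always executes at least once. Returns the run count."""
    runs = 0
    while True:
        body(runs)
        runs += 1
        if done(runs):
            break
    return runs
```

With a condition that is immediately satisfied, the While variant performs no work at all, while the Repeat Until variant still performs one iteration.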
The time between diagnosis and the institution of symptomatic treatment is critical in the effort to find a cure for Parkinson’s Disease (PD). A paper published in Nature Partner Journal: Parkinson’s Disease notes too many early PD patients wait too long before seeking medical attention, or start taking symptomatic medications before they are required, thereby dramatically shrinking the pool of candidates for clinical trials. Parkinson’s disease is a disorder of the central nervous system that affects movement. Symptoms include tremors, stiffness, and slow and small movement. The pace of progression varies among patients, making the months following diagnosis crucial to researchers studying the disease’s progression. “The critical time of about one year from when the patient can be diagnosed with early PD based on mild classic motor features until they truly require symptomatic therapy can be considered the Golden Year,” said lead author Robert A. Hauser, MD, director of the Parkinson’s & Movement Disorder Center at the University of South Florida. “It is during this early, untreated phase, that progression of clinical symptoms reflects the progression of the underlying disease.” Hauser says that in order to determine whether or not a potential disease slowing therapy is actually working, they must be able to compare the therapy to a placebo without interference from symptomatic treatment. Otherwise, they won’t know if the therapy is slowing the disease’s progression or if they are just seeing the effects of symptomatic treatment. This requires patients to seek assessment soon after they notice the onset of tremor or slow movement. In addition, physicians should consider referring patients to clinical trials soon after diagnosis and delay prescribing symptomatic medication until it’s necessary. If a patient waits until symptomatic treatment is necessary, the opportunity to participate in these crucial clinical trials is lost. More information: Robert A. Hauser. 
Help cure Parkinson's disease: please don't waste the Golden Year, npj Parkinson's Disease (2018). DOI: 10.1038/s41531-018-0065-1
Provided by: University of South Florida
"Damballa FirstAlert will discover cyber threats long before traditional preventative security solutions will have the signatures or blacklists they would need to detect the threat," the company says. Damballa FirstAlert was the cyber threat intelligence system behind the discovery of the IMDDOS botnet that Damballa announced on September 13, 2010. In additional to real-world trials of the new inventions, Damballa Labs discovered multiple botnets in the early stages of their mass infection lifecycles. "These botnets were taken down as a matter of course," Damballa says. "In all cases, the botnets were discovered weeks before the malware was first detected through traditional approaches [on average 30 days]." Damballa FirstAlert is the cyber threat intelligence system that powers the Damballa Failsafe (for enterprise networks) and Damballa'CSP (for communications service providers). With Damballa FirstAlert, Damballa customers will be able to detect and terminate threats in the early stages of their infection lifecycle and long before traditional prevention systems would identify the infection or breach, the company says. "The introduction of these new inventions comes at a time when customers are acutely aware of the enormous damage a network security breach can cause," says Val Rahmani, CEO of Damballa. "Any enterprise, ISP or telco network protected by Damballa products will detect and block cyber attacks weeks and possibly months before any malware-dependant solutions will ever be aware of the threat." The two new inventions, Kopis and Notos, are both Damballa patent-pending technology. Kopis is an early warning threat discovery system that monitors domain look-up behaviors across autonomous networks, uniquely capable of operating at different levels of the Internet hierarchy. The Kopis research paper will first appear in the August 2011 proceedings of the 20th USENIX Security Symposium. 
Notos is a dynamic reputation system for DNS, which operates by utilizing the massive store of historical DNS data aggregated at Damballa Labs. It assigns DNS reputation scores to new, previously unseen domains. The Notos research paper appeared last year in the proceedings of the 19th USENIX Security Symposium. "Just as DNS is a critical component of the Internet's functionality, it is also the Achilles' heel of cybercriminals," says Gunter Ollmann, vice president of research at Damballa.
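While the actual Notos algorithm is described in the USENIX paper, the general shape of a dynamic DNS reputation score can be sketched as a function of observable features, such as how old a domain is and how widely it is resolved. The features, weights, and thresholds below are illustrative assumptions, not Damballa's model:

```python
def reputation_score(domain_age_days: int, lookup_networks: int,
                     blacklist_hits: int) -> float:
    """Toy DNS reputation: old, widely resolved domains score high;
    young domains already seen on blacklists score low.

    Returns a score in [0, 1], higher meaning more trustworthy.
    Illustrative only; not the Notos algorithm.
    """
    age = min(domain_age_days / 365.0, 1.0)      # cap at one year
    spread = min(lookup_networks / 1000.0, 1.0)  # cap at 1000 networks
    penalty = min(blacklist_hits * 0.5, 1.0)     # heavy blacklist weight
    return max(0.0, 0.6 * age + 0.4 * spread - penalty)
```

The value of such a system is exactly what the article describes: a brand-new domain with no history and a bad pattern can be scored as suspicious before any signature for its malware exists.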
Slow internet can be so annoying, especially when you are working from home. Sometimes you may detect these problems by yourself and fix them, or you may need help from IT services near you to get it done. Many factors can make your internet slow. Here we've compiled a list of possible causes of your slow speeds.

Viruses
A computer virus can cause permanent damage to your device and its network speeds. When a computer is infected with a virus, the virus installs foreign code on the computer. This code then spreads by sending copies of itself via email. Some viruses rapidly multiply by sending out hundreds of email messages per minute. They leave the computer with weak computing power and a poor internet connection. It is difficult to tell when viruses are active, so it's good to leave your antivirus software running at all times.

Browser Add-Ons
Browser add-ons can be another cause of slow internet. Browser add-ons are programs displayed on your browser's toolbar, such as multimedia add-ons, search bars, or other programs. Many browser add-ons can enhance your surfing experience by allowing you to view multimedia or specialized documents. However, add-ons can also slow your internet connection down. If you think add-ons are slowing down your browser, try opening it with add-ons disabled. You can disable add-ons for just the session, and if your performance improves, consider turning them off permanently using the add-on manager. To activate the add-on manager in your web browser, click the tools icon and then "manage add-ons".

VPNs
A virtual private network (VPN) is software that encrypts data transmissions between your device and servers while also masking your IP address. You have the option of paying for a VPN or using a free service. The paid options are generally faster. However, since you are routing traffic through a relay, a VPN can still slow down your internet. It can also be slow if you use it during peak hours when there is congestion.
To fix this problem, try the different location options that your VPN offers. Not all VPNs are equal, and the speeds available might vary significantly. Be careful when using free VPNs, since you might have to sacrifice your data, security, or speed.

Unauthorized Users
If you're experiencing slow internet speeds, someone else may be using your internet account. Most routers come with a default password, and it is advisable to change it. Ensure the password is a combination of complex characters by mixing numbers and letters. If you change it to something weak or leave the hotspot open, others can access your network without your permission. This increases traffic on your home network, which ultimately leads to slower internet speeds.

Congested Wi-Fi Channels
Wi-Fi channels facilitate the sending and receiving of data over the internet. If you have too many connections, transmission will slow down. However, you may be able to switch to less congested channels, depending on the type of router you are using. To analyze your Wi-Fi channels, you can use Android or iOS apps. These apps will help you identify the devices connected to your network.

Lack of Enough Horsepower
Determining the amount of speed you need in a business is essential. This depends on how many people are using the internet and what they are using it for. They may be using it to stream music or to browse, and these activities require different amounts of speed. Dealing with slow internet speeds can be frustrating, since it limits your productivity and efficiency. However, you first need to determine what is making the internet slow so you know how to solve it. You can also seek professional help for your business from an IT company.
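Estimating the "horsepower" a business connection needs can be done with simple arithmetic: multiply each activity's per-user bandwidth by the number of users doing it, then add headroom for bursts. The per-activity figures below are illustrative assumptions, not measured requirements:

```python
# Rough per-user bandwidth needs in Mbps. These numbers are
# illustrative assumptions; check your applications' actual needs.
ACTIVITY_MBPS = {
    "browsing": 1.0,
    "music": 0.5,
    "video_calls": 3.0,
    "hd_streaming": 5.0,
}

def required_bandwidth(users_by_activity: dict, headroom: float = 1.2) -> float:
    """Sum per-activity needs across users and add burst headroom."""
    total = sum(ACTIVITY_MBPS[a] * n for a, n in users_by_activity.items())
    return total * headroom
```

For example, an office with four people browsing, two streaming HD video, and one on a video call would need roughly 20 Mbps; if the plan you are paying for is below the estimate, the connection itself is the bottleneck.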
Both VPN services and the Tor network were developed for people looking for better online privacy, but it's quite rare that people know the difference between the two. Both shield your true IP address and give you a new one that, they claim, cannot be traced back to your location. Before using one or the other to secure yourself while working from home (a situation much more common in these uncertain times created by COVID-19), take a look at what each technology offers and where it falls short.

Most of us have seen commercials all over the internet for VPNs, an acronym for virtual private networks. The ads look great, especially since most VPNs are an affordable way to add security while you work from home. But they have a few pitfalls you should know about before deciding that a VPN will be your form of security.

A VPN reroutes your information through one relay, usually a network owned by the VPN provider or rented space on another company's network, and essentially replaces your IP address with one from the provider's servers. In this process, the information you send over the internet is encrypted, but what many don't understand is that once the information leaves the VPN provider's server, it is no longer encrypted. For encryption to be effective, it needs to be supported on both ends, so a VPN alone is, strictly speaking, generally ineffective at securing your information all the way to its destination.

More than that, VPN providers are not as secure as you may think. Providers located in countries governed by the Five Eyes surveillance alliance must abide by local law, and that law does not protect the anonymity of VPN customers if the government subpoenas the provider for its logs and records. This may make you question what is actually secure and anonymous about using a VPN.
However, there's another option for securing yourself on the internet: Tor. Tor is free, not-for-profit software that protects your information more thoroughly than a typical VPN. Instead of sending your information to one server to change your IP address, Tor encrypts it and reroutes it through three relays that are operated independently and chosen at random. What makes this more secure is that no relay knows the full traffic route; each one sees only the entry and exit points of its own segment. The relays are different each time, and the circuit changes roughly every ten minutes, making traffic incredibly difficult to track. Unlike a VPN's single hop, Tor applies layered encryption across the whole circuit: the information is re-encrypted at each relay along the path, which makes it more secure than a VPN in terms of full-route encryption.

So, before you decide to purchase that VPN you saw in a commercial while working from home, rethink your decision based on the information in this article. There might be something better out there that is actually free.

LibertyID is the leader in identity theft restoration, having restored the identities of tens of thousands of individuals without fail. If you retain personal information on your customers, now is the time to get data breach planning and a response program in place with our LibertyID for Small Business data breach preparation program. With LibertyID Enterprise you can now add value to existing products, services, or relationships by covering your customers, employees, or members with LibertyID's fully managed identity theft restoration service, at a fraction of our retail price, with no enrollment and no file sharing. We have no direct communication with your group members until they need us. Call us now for a no-obligation proposal at 844-411-LIBERTY (844-411-5423).
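The three-relay routing described above can be illustrated with a toy model. The relay names below are invented, and real Tor path selection also weighs relay bandwidth, flags, and family/subnet constraints; this sketch only shows why no single relay ever sees the full client-to-destination route.

```python
import random

# Toy illustration of Tor-style three-hop circuits. Each relay learns only
# its immediate neighbours, never the whole path. Relay names are made up.

RELAYS = ["relayA", "relayB", "relayC", "relayD", "relayE", "relayF"]

def build_circuit(relays, hops=3):
    """Pick three distinct relays at random (real Tor weights this choice)."""
    return random.sample(relays, hops)

def knowledge(circuit, client="client", dest="destination"):
    """Return what each hop can observe: its predecessor and successor only."""
    path = [client] + circuit + [dest]
    return {hop: (path[i], path[i + 2]) for i, hop in enumerate(circuit)}

circuit = build_circuit(RELAYS)
for relay, (prev_hop, next_hop) in knowledge(circuit).items():
    print(f"{relay} sees: {prev_hop} -> {next_hop}")
```

Note that only the entry relay ever sees the client, only the exit relay ever sees the destination, and the middle relay sees neither.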
GoCertify Honors John Lewis on Martin Luther King Jr. Day

Today is a holiday in the United States that honors Martin Luther King Jr., the civil rights leader who dedicated his adult years to ending racial inequality. Dr. King focused his work on the racial divide in the United States, but his wise words have universal application. You don't have to be an American to appreciate the profound wisdom of a statement like, "The time is always right to do what is right." Martin Luther King Jr. died April 4, 1968 at the tragically young age of 39, shot by an assassin at the Lorraine Motel in Memphis, Tenn. The federal holiday that commemorates his life and legacy was created during President Ronald Reagan's first term in office and formally assigned to the third Monday in January in 1992 under President George H.W. Bush.

As we do on most holidays, we've prepared a thematically appropriate quiz. Past Martin Luther King Jr. Day quizzes have focused on Dr. King himself. Today, however, we're broadening our view to honor the legacy of the late John Lewis, a colleague and supporter of King's who passed away last year at age 80. Lewis was just 15 the first time that he heard King speak, and only 18 the first time that he met him in person. At age 21, he became one of the original Freedom Riders, helping to carry out a historic bus tour of segregated U.S. states. A civil rights giant, Lewis adhered to and built upon King's vision and legacy for the rest of his life.

NOTE: To view last year's quiz, click here.

1) An aspiring preacher from boyhood on, a young John Lewis initially honed his oratory by preaching to what congregation?

2) Which institution of higher learning in Alabama ignored Lewis' application for admission, an outcome that inspired Lewis to write a letter to Martin Luther King Jr.?

3) What is the four-word phrase from his student nonviolent organizing years that Lewis used to encourage dissent and political engagement throughout his life?
4) As one of the 13 original Freedom Riders (seven blacks and six whites), Lewis achieved what unwelcome first in Rock Hill, South Carolina?

5) How many times had Lewis been arrested before assuming leadership of the Student Nonviolent Coordinating Committee (SNCC) in 1963?

6) What role did Lewis play in the "Bloody Sunday" march across the Edmund Pettus Bridge in Selma, Ala.?

7) What year did Lewis first win election to the U.S. House of Representatives?

8) What legislation did Lewis introduce in 1988, one year after taking office for the first time, that stalled out in the Senate for the next 15 years?

9) Which Democratic candidate for president did John Lewis endorse in the 2008 U.S. presidential election?

10) What work resulted in Lewis' winning a National Book Award in 2016?

Answers

1) Chickens. As a small boy, Lewis cared for a flock of chickens, who were the first of God's creatures to hear him preach.

2) Troy State College (now Troy University). King discussed with Lewis the possibility of suing Troy State to get Lewis admitted, but warned that violent consequences could result. Lewis ultimately attended two schools, American Baptist Theological Institute and Fisk University, in Nashville, Tenn.

3) Lewis often encouraged hearers to become involved in "good trouble, necessary trouble" to advance progress toward racial equality and social justice.

4) Lewis was the first of the Freedom Riders to be physically assaulted. He was attacked when he and two of the other riders attempted to enter a "whites only" waiting room at an interstate bus terminal in Rock Hill.

5) 24. Lewis led the SNCC from 1963 to 1966.

6) Lewis and fellow activist Hosea Williams led more than 600 marchers across the bridge. Alabama State Troopers waiting on the far side of the bridge attempted to disperse the marchers using tear gas and nightsticks. Lewis' skull was fractured during the resulting chaos.

7) 1986.
Lewis would go on to represent Georgia's 5th congressional district for the next 34 years, winning re-election 16 times.

8) Lewis sponsored a bill to create a national black history museum in Washington, D.C., which he put forward each year until 2003, when it was finally signed into law following the retirement of Sen. Jesse Helms of North Carolina, a consistent critic of the proposed legislation. The National Museum of African American History and Culture was formally opened Sept. 25, 2016.

9) Barack Obama. After initially endorsing Hillary Clinton on Oct. 12, 2007, Lewis formally switched his endorsement to Obama on Feb. 27, 2008. After Obama took his oath of office, Lewis asked the new president to sign a commemorative photo. Obama wrote: "Because of you, John. Barack Obama."

10) Lewis won the 2016 National Book Award in Young People's Literature for March, a three-volume black-and-white graphic novel about his own experiences in the Civil Rights movement. Lewis and Andrew Aydin wrote the text, with illustrations by Nate Powell.
London start-up Gravity has created a tablet app that allows the user to draw in 3D. And while 3D modeling is certainly not new, Gravity Sketch's claim to fame is that, according to Wired.com, it's incredibly easy to use: draw a circle by dragging your finger across the screen, and it appears as a sphere; draw a square, and it appears as a cube. Built-in tools then let users apply different finishes and snap everything to an aligned grid before exporting the OBJ file to the 3D software of their choice. The learning curve required by CAD tools, for example, is basically eliminated here, as Gravity Sketch uses touch and gesture rather than menus, icons, and engineering jargon. The implications are clear, and Wired says the idea that anyone, not just professionals, can sketch a 3D object with scant technical knowledge will grow increasingly important. It's just another element to consider as we assess how fast 3D printing will create game-changing impacts on both the industrial and consumer markets. I'm Anna Wells and this is IEN Now.
Tank level sensors come in all shapes and sizes. Here I will focus on those designed to monitor wet environments such as fuel tanks and water storage tanks. Without a way to monitor the contents of a tank, you may be left in the dark when a power outage occurs and your generator is out of gas because no one noticed the leak that drained your reserve tanks.

These devices may at first appear to be mythical gadgets from another planet, but upon further investigation they are extremely simple, using only one moving part to open and close a circuit. At the heart of the sensor, a dry reed switch is encapsulated inside a down-stem. A dry reed switch is a hollow device with two contacts that normally remain apart; when a magnetic field passes close enough, the contacts pull together and complete the circuit. In a tank level sensor, a float rides up and down the stem that houses the switch. The float typically contains a magnet, so as liquid levels rise and fall and the float moves, the magnet passes over the switch.

For monitoring purposes, these switches can be wired up in many ways: within a circuit to trigger a low-fuel indicator, to turn on a pump, or to signal an alarm for a technician to attend to the problem, as is done in remote site monitoring solutions where on-site personnel may not be available. These switches provide that extra peace of mind, allowing you to keep a constant eye on whether your fluid storage devices are performing their duties.
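The float-and-reed-switch behaviour described above amounts to a simple threshold check, sketched here with illustrative dimensions (the trigger range of a real switch depends on the magnet and switch used).

```python
# Minimal model of a float-type tank level sensor: the reed switch closes
# when the magnet in the float passes within range of the switch on the stem.
# All dimensions and thresholds below are illustrative assumptions.

class ReedSwitchSensor:
    def __init__(self, switch_height_cm, trigger_range_cm=1.0):
        self.switch_height = switch_height_cm  # where the switch sits on the stem
        self.trigger_range = trigger_range_cm  # assumed magnetic field reach

    def circuit_closed(self, liquid_level_cm):
        # The float (and its magnet) rides at the liquid surface; the circuit
        # closes only while the magnet is near the switch.
        return abs(liquid_level_cm - self.switch_height) <= self.trigger_range

# A low-fuel switch mounted 10 cm up the stem.
low_fuel = ReedSwitchSensor(switch_height_cm=10.0)
for level in (50.0, 10.5, 9.2):
    state = "CLOSED -> low-fuel alarm" if low_fuel.circuit_closed(level) else "open"
    print(f"level {level:5.1f} cm: switch {state}")
```

In a real deployment the closed circuit would drive an indicator lamp, a pump relay, or an alarm input on a remote monitoring unit rather than a print statement.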
Women’s equal participation and leadership in government are essential to achieving diversity inclusion in the public sphere, but women remain underrepresented at various levels of government worldwide. Female leaders in government discussed the best practices for women to navigate the career ladder in government, and overcome the barriers placed in their way during a GovLoop webinar on July 13. Diversity inclusion is often perceived to be about perspective, representation, challenging conversation, and supporting inclusion. Therefore, to promote diversity inclusion in the workplace – specifically in government – leaders must move out of their inner circles to provide opportunities to other employees within their agencies. “This is the time of the woman. Diversity Inclusion and equity are in the forefront of everyone’s mind across the board in the government and the private sector, and there are so many opportunities for women moving forward,” said Caronell LeMalle Diew, program manager at the Federal Aviation Administration. However, when leaders do not move out of their inner circles, this becomes a barrier for women because the opportunities available to climb the career ladder are limited. “Leaders need to identify their own biases and create opportunities for those outside their inner circle. This has to be a continuous conscious effort on the part of leaders,” said Sandra Auchter, director at the Denver office of the National Geospatial-Intelligence Agency. In addition, panelists agreed that a ubiquitous challenge that women face when entering the workforce is “imposter syndrome,” which is prevalent mainly among high-achieving women and is loosely defined as doubting your abilities and feeling like a fraud. “The majority of women in my business that are in leadership positions have had imposter syndrome at some point in their career. I got to remind women to remember that they belong here,” said Auchter. 
Imposter syndrome also holds women back from applying to specific government jobs or accepting promotions because they believe they do not meet professional qualifications. And according to Arianne Gallagher, director at the Office of Presidential Fellowships at OPM’s Center for Leadership Development, women need to be vulnerable and let down those barriers in these instances. “The worst that can happen in these instances is you do not get the job. But you cannot let that fear of failure prevent you from attempting different opportunities,” said Gallagher. “But remember always to emphasize your strengths and show that you are adaptable and can learn new skills.” Additionally, women early in their career need to take a step back and determine the requirements that need to be filled in an agency and serve them, whether it be anything from asking questions to clarify tasks for the team to understanding software and scheduling meetings. “This is a transferable trait wherever you go, and it will help build your presence and credibility within your agency, ultimately making you invaluable,” said Gallagher.
The Fast Lane whitepaper, Enabling Digital Business with Blockchain, introduced some basic blockchain concepts - transactions, blocks, ledgers and chaining - but didn’t expand upon how blocks are added to the ledger. A blockchain “consensus” is the agreement among possibly untrusted parties to add a block to the ledger (and consequently to the replicated ledgers). The consensus process is currently a major area of innovation and competition among blockchain developers. It might seem obvious, but consensus is only required when there is more than one node and there are transactions needing to be processed. Today, without blockchain, a bank can update the customer’s account without needing agreement from any other party, but it gets more complicated when two people who don’t know each other want to directly transfer something of value (funds or other objects) without the involvement of a trusted intermediary. No standard mechanism currently exists for blockchain consensus protocols. Consensus mechanisms are often labelled as “proof of something” – the Proof of Work (PoW) approach used with Bitcoin is the best known so far. Other forms of consensus protocol include Proof of Stake, Delegated Proof of Stake, Proof of Activity, Proof of Authority, Proof of Approval, Proof of Elapsed Time, and Practical Byzantine Fault Tolerance (PBFT). Executing the Bitcoin PoW process is called “mining” and the participating nodes are “miners.” The basic steps in the process (some details are not included here) are: - A user submits a transaction (i.e., an instruction to transfer funds/coins) to a Bitcoin node via a wallet; new transactions are verified and broadcast to all other nodes. Multiple transactions can be submitted at approximately the same time anywhere in the network. - Each node collects the transactions it receives into a holding file, periodically assembles them into a block, and calculates a Merkle Root Hash. 
- The block header is produced by doing the work (i.e., the mining) needed to find a "nonce" that goes into the header. As was noted in the whitepaper, the nonce is a number that makes the hash of the header meet specific criteria, and this takes considerable processing to determine. The miner that completes this task first earns the right to have its block added to the chain (and gets paid a fee for being the winner). Miners can work independently or in groups.
- When a miner finds a nonce, it completes the block and sends it to all other nodes.
- Nodes accept the block only if all transactions in it are valid and not already spent. Nodes signal acceptance by working on creating the next block in the chain, using the hash of the accepted block as the previous hash.

When looking for a blockchain solution, it's important to compare your requirements to the consensus mechanism that is being used. Did you miss our whitepaper? Read "Enabling Digital Business with Blockchain" here.
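The nonce search in the mining steps above can be sketched as follows. This toy version hashes a text header with a single SHA-256 pass and treats difficulty as a count of leading zero hex digits; real Bitcoin mining double-SHA-256-hashes an 80-byte binary header against a full 256-bit target.

```python
import hashlib

# Toy proof-of-work: grind nonces until the hash of (header + nonce) starts
# with `difficulty` zero hex digits. The header string is a stand-in for the
# real binary block header (previous hash, Merkle root, timestamp, etc.).

def mine(header: str, difficulty: int = 4):
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # this miner "wins" the right to add the block
        nonce += 1

nonce, digest = mine("prev_hash|merkle_root|timestamp", difficulty=4)
print(f"nonce={nonce} hash={digest}")
```

Each extra zero digit multiplies the expected work by 16, which is why finding a nonce is expensive while verifying one (a single hash) is trivial for every other node.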
What is OpenAPI?

Originally known as the Swagger Specification, the OpenAPI Specification (OAS) is a format that can be used to describe, produce, consume, and visualize RESTful web services. It is a specification standard for REST APIs that defines their structure and syntax. Most notably, it is programming-language agnostic. This enables both computers and human users to identify and understand service capabilities without requiring additional documentation, access to source code, or inspection of network traffic. The OAS removes the guesswork from calling services, much as interface descriptions simplify lower-level programming. With OpenAPI, you can quickly discover how an API works.

API specifications are typically written in YAML or JSON. An OpenAPI file describes an API in its entirety, including:
- Endpoints: which are available (/users) and the operations on each (GET /users, POST /users)
- Authentication methods
- Operation parameters for each operation (input and output)

OpenAPI vs Swagger

OpenAPI was part of the Swagger framework until 2016, when it became a separate project. It is now overseen by the OpenAPI Initiative, an open-source collaboration project of the Linux Foundation. Today, tools such as Swagger can generate documentation, code, and test cases for the OpenAPI specification based on interface files. To understand the difference between Swagger and OpenAPI, it helps to distinguish between the specification itself, the tools that implement it, and the users. OpenAPI is the name of the specification. It was originally developed by Smartbear Software, who later donated it; the OpenAPI Initiative now fosters the development of the specification in a collaborative partnership with over 30 organizations.
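As a concrete illustration of the structure listed above (endpoints, operations, responses), here is a minimal OpenAPI 3.0 document built as a plain Python dict and printed as JSON. The /users endpoint and its descriptions are invented for illustration; the top-level keys (openapi, info, paths) and the operation layout follow the published specification.

```python
import json

# A minimal OpenAPI 3.0 document. In practice this would live in a YAML or
# JSON file; building it as a dict just makes the structure easy to see.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Example Users API", "version": "1.0.0"},
    "paths": {
        "/users": {
            "get": {
                "summary": "List users",
                "responses": {"200": {"description": "A list of users"}},
            },
            "post": {
                "summary": "Create a user",
                "responses": {"201": {"description": "User created"}},
            },
        }
    },
}

print(json.dumps(spec, indent=2))
```

Documentation generators, client generators, and gateways all consume exactly this shape of document, which is what makes it a single source of truth.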
Smartbear Software, which today leads the development of Swagger tools, is a member of the OpenAPI Initiative, as are Google, Microsoft, CapitalOne, IBM, and other organizations representing different parts of the tech world. Swagger is a widely used toolset for implementing the OpenAPI specification, developed by Smartbear Software. There is a mix of free, open-source, and commercial Swagger tools that users can deploy at different stages of the API lifecycle, and other OpenAPI tools are also available. Swagger tools are frequently still perceived as synonymous with the specification, since they were developed by the same team in tandem with its creation, but they are not the only tools available for implementing the OpenAPI Specification. A full list of API documentation, design, management, testing, and monitoring tools that support the latest version of the specification is available on GitHub.

Why use OpenAPI?

There are several important reasons to use the OpenAPI specification. Below we explore some of the more notable reasons developers turn to the OpenAPI standard.

Single point of truth

An OpenAPI definition is machine-readable and serves as the single source of truth for the API. This allows API definitions to be imported into clients for manual testing and ensures each piece of the system can be verified against the specification. In larger API deployments, running an API gateway in front of an API implementation allows users to compare incoming and outgoing traffic against the specification. It also allows users to assess third-party APIs against OpenAPI definitions, all to reduce the risk of product malfunction. By using a single file to describe an API, including objects, endpoints, and paths, users can convert either a server-side or client-side description into a specific implementation whose code is a product of the description.
This reduces misalignment between the back-end-offered API and the client-consumed API. OpenAPI allows users to automatically generate API clients and update them with the latest changes as needed. Generating API clients automatically means they can be managed, integrated, and consumed just as any other third-party dependency would be, avoiding the need to program the typical boilerplate for each project. Once the API is defined, users can build and generate client code for Android, iOS, and most other programming languages in seconds.

Readable, updated documentation

Both YAML and JSON formats are easy to read and modify, and the OpenAPI definition file can be described in either. This means no particular programming language or framework is needed to make or discuss changes. Neither out-of-date documentation nor missing documentation is acceptable. Because OpenAPI becomes a single point of truth, even API documentation kept solely for internal use is never outdated, as the entire API specification is in a single format that several tools, including the documentation generator, can automatically process.

OpenAPI is the current de facto industry standard for API definition. This means that any user can access services from an API using this specification without any extra setup, out of the box. OpenAPI is distinct from other specifications in that:
- It delivers a language-agnostic, standard interface for describing RESTful APIs
- It is understandable to human readers and machine-readable
- OAS consumers, human or machine, do not need additional documentation or access to source code or traffic to understand the capabilities of a service under development

This enables users to build reliable, adaptable, user-friendly APIs for clients and interact easily with remote services using minimal implementation logic.

Who uses OpenAPI?
Here are some of the industries that benefit tremendously from OpenAPI technology:

eCommerce

OpenAPI technology allows businesses in the eCommerce sector to scale up services rapidly to meet consumer demand. The top eCommerce platforms in the world favor open-source design, which means that implementing the design framework for most eCommerce APIs is typically simple for API designers who use the OAS.

Media

APIs in the media industry, such as Twitter's, often use the OpenAPI specification because it suits APIs that disseminate information to the public rapidly and efficiently.

Government

Open government generally refers to providing open data through APIs to serve government functions in new ways, or providing open APIs that serve privileged government functions with the right consent. For example, governments might provide taxation information through an API so citizens can avoid paying a specialist to prepare their taxes, or offer an open registry of births and deaths. The US government currently offers a catalog of open APIs online.

When to use OpenAPI

Among the biggest pain points for API developers is inaccurate and outdated documentation. An API that is not well documented is difficult or impossible to integrate, but many developers write documentation that is not comprehensive, or avoid writing documentation altogether. OpenAPI solves this problem by making it simple to generate documentation automatically that always matches the changing architecture of the API. It also enables the use of various automated tools, since API descriptions are machine-readable. All of this means developers can spend more time coding and less time documenting. Documentation is the most visible OpenAPI output, but there are numerous other applications. Other OpenAPI examples include setting up testing, such as continuous integration, using the same OpenAPI file that generated the documentation.
OpenAPI best practices

Adopt REST patterns

Don't reinvent the wheel as you write descriptions for your endpoints; look to established conventions that developers are comfortable with and have experienced across many APIs. The more similar your API is to something developers have seen before, the easier it will be to integrate. HTTP APIs often use the Representational State Transfer (REST) architectural style. Although most users don't adopt the patterns wholesale, there are three major criteria to watch for:
- Organize the API around resources
- Communicate actions using HTTP methods
- Rely on standard HTTP status codes

Create friendly descriptions

Any API definition includes technical details, but it should also include human-readable description and summary fields throughout. Endpoints, parameters, and responses all accept summaries, for example. Blank or useless text in these fields hurts the quality of the OpenAPI documentation and obscures the purpose of each endpoint and the context around its expected inputs and outputs.

In a RESTful API design approach, the API is organized around resources. Using nouns for those resources is a common naming tactic; choose either singular or plural nouns and stick with that choice for the API, and across all your APIs if feasible. Aim for consistency in naming fields, consider punctuation and capitalization, and pick one naming convention and stay with it. While endpoints and the fields in parameters and results don't technically need human-readable labels, unwieldy and meaningless names make it difficult to remember a call's purpose long enough to copy it into code. The bottom line is giving users a seamless experience with the API: predictable names allow developers to make tweaks to code easily, sometimes without referring to the documentation.
Continuous integration and delivery

After OpenAPI definitions are created, documentation is ideally generated automatically. As part of their development workflow, many software teams use a continuous integration (CI) pipeline. This allows changed code to be tested automatically in real time. Similarly, teams can deploy the code automatically once they have accepted the changes, either to a staging environment for further testing or, if the team has adopted continuous delivery (CD) practices, directly to production.

How OpenAPI supports API security

Noname Security is a holistic API security solution that protects APIs from authorization issues, data leakage, misuse, abuse, and data corruption without network modifications or agents. Noname allows users to analyze APIs and user behavior, delivering accurate and actionable vulnerability detection and breach prevention. The system provides unparalleled discovery and classification capabilities, enabling deeper insight into APIs, API inventory, data, users, and third-party interactions. Noname allows you to discover the misconfigurations so common to complex environments, such as configuration flaws, shadow IT, and rogue APIs, before attackers exploit and abuse them. In fact, the system allows you to see how your APIs change before users can, to detect and correct flaws as they are created. Simple to connect and use, learn more about OpenAPI and API security with Noname Security.
Using your fingerprint to unlock a device was once only featured in films about the future. Nobody ever considered it would become a reality. Fast forward to 2018, and we no longer rely solely on personal data and passwords to gain access. Instead, biometrics, from fingerprints and eye scans to voice and facial recognition, are being harnessed. But what does this mean for software development?

What are biometrics?

The word biometrics is derived from the Greek words for life (bios) and measure (metrikos): literally, measuring life. Biometrics is the measurement and statistical analysis of a person's unique biology, encompassing both physical and behavioural traits. The purpose of biometric authentication is to accurately identify a person by an aspect of their unique biological makeup.

Are there different types of biometrics?

With technology constantly evolving, so are the biometric options available. Some of the most prevalent forms include:
- Fingerprint recognition
- Retina scans
- Facial recognition
- Voice recognition
- DNA analysis

Artificial Intelligence and biometrics

Biometric systems can be operated in two modes: identification and verification. Identification asks who a person is; the sample is compared against a whole set of known users, with no specific identity claimed in advance. Verification, on the other hand, asks whether a person is who they say they are; the sample is compared against one specific claimed identity, making it a yes-or-no decision.

When it comes to biometrics and Artificial Intelligence (AI) technology, the two fit hand in hand. Some AI technology ships with a level of biometric software. For example, Amazon's Alexa is a voice-controlled virtual assistant; in this instance, the biometric system is set up to identify a human voice. In contrast, the iPhone X uses AI to recognise the face of a user, meaning the biometric software is used to verify the identity of the phone's single owner.
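The two modes can be sketched with plain numbers standing in for real biometric templates. The enrolled values, similarity function, and threshold below are all illustrative assumptions, not a production matcher.

```python
# Toy contrast between the two biometric modes: verification (1:1) checks a
# claimed identity; identification (1:N) searches all enrolled users.
# Floats stand in for real biometric templates; values are invented.

ENROLLED = {"alice": 0.82, "bob": 0.35, "carol": 0.61}
THRESHOLD = 0.05  # assumed maximum distance for a match

def similarity(a, b):
    return abs(a - b)  # lower = more similar in this toy model

def verify(claimed_user, sample):
    """1:1 check: is this person who they claim to be? (yes/no)"""
    return similarity(ENROLLED[claimed_user], sample) <= THRESHOLD

def identify(sample):
    """1:N search: which enrolled user, if any, best matches the sample?"""
    best = min(ENROLLED, key=lambda u: similarity(ENROLLED[u], sample))
    return best if similarity(ENROLLED[best], sample) <= THRESHOLD else None
```

A phone unlocking for its single owner runs something like `verify`; a voice assistant deciding which household member is speaking runs something like `identify`.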
The pitfalls of biometrics

With AI technology encompassing biometric software now readily available to consumers, issues are arising around identity theft and personal information getting into the wrong hands. It's a catch-22 situation: while biometrics are being used for safety and security reasons, they open up data security and integrity issues of another kind. Before harnessing any form of biometric software, it is best to consider:
- Where the data will be stored
- How the data is accessed or shared
- How long data is kept
- The security of the hardware

Biometrics are undoubtedly an attractive identification and verification method for software users. Their high levels of reliability and consistency are positive aspects, but there are inevitable issues surrounding data privacy. Any company or organisation wishing to use biometrics should ensure a concrete data security plan is in place.
Guest blog post by David Balaban with Privacy PC

Digital transformation (DT) is an innovative process that requires fundamental changes in industrial technologies, society, culture, the financial sector, and the principles of creating new products and services. In fact, this is not just a set of IT products and solutions to be deployed in companies, but a global revision of approaches and business strategies carried out with the help of information technology. Digital transformation is a transition period leading us to the next industrial revolution.

Not all companies are ready for the new and rather stringent requirements that digital transformation imposes on them, namely, for a complete modernization of business methods, revision of internal business processes, and new types of relationships within the company. Moreover, top managers must be prepared for both the positive and negative consequences of digital transformation.

Digital transformation is not just the automation and digitalization of production processes. It is the integration of conventional office and industrial technologies that we use on a daily basis with completely new IT-specific areas (cloud computing, artificial intelligence, machine learning, IoT, etc.).

Possible negative side effects of digital transformation

The revolutionary changes that DT brings to the business have generated certain problems for information security services. New vectors of information security threats have emerged, and the range of vulnerabilities potentially leading to cyberattacks has expanded. The currently popular approach called DevOps is a subject of special concern to information security specialists. It fundamentally changes the relationship between software developers, system administrators, technical services, and end-users.
I would also like to note that one of the major obstacles to the rapid implementation of digital transformation in companies is old (legacy) technology that has been serving production and office processes for many years. On the one hand, it cannot be replaced quickly (without stopping business processes). On the other hand, it does not fit well into digital transformation processes and carries multiple information security threats.

Digital transformation cybersecurity problems

1. The opacity of cybersecurity events in the corporate infrastructure
In large companies, several different types of cloud services are widely used. They are all equipped with their own cybersecurity tools and various internal services. However, there are still many problems, both with the integration of such solutions and with the transparency and recording of all security incidents in such a complex IT structure. Moreover, digital transformation implies significant growth of both cloud solutions and the complexity of corporate infrastructure due to the introduction of IoT, blockchain, AI, etc.

2. Problems associated with automating cybersecurity processes
In most companies, even large ones, many information security processes remain non-automated. Nevertheless, the employees of the cybersecurity departments of such companies are confident that their protection works against all possible attack vectors, both inside the perimeter and in the clouds, on mobile devices, web servers, etc. Firewalls, intrusion detection systems, and other security solutions still provide a certain level of security in certain areas and reduce the number of information security incidents. However, without a general strategy and security policy, cybersecurity problems will certainly arise in the future.

3. Integration of security solutions
Most organizations have huge problems with the integration of various infosec solutions. There is no end-to-end visibility of all threats.
The situation is also bad in terms of compliance and the requirements of regulators.

4. Flexible scaling
Security experts have found that in many enterprises, a quarter of the corporate infrastructure remains unprotected. Even if a company has effective solutions that protect parts of its IT infrastructure, this does not, in general, increase the overall level of security in the organization, due to poor integration and scalability of these individual solutions. As the IT infrastructure grows due to digital transformation, and as cyberattacks grow in complexity, there is a need for scalable cybersecurity solutions. At the moment, the biggest problems for cybersecurity professionals are complex polymorphic cyberattacks, targeted cyberattacks (APT – advanced persistent threat), and the growing use of DevOps, which increases the risk of untimely discovery of new vulnerabilities.

5. Software updates
Although we need to constantly update all software, there are still dangerous threats associated with updating it: sometimes, along with "patches" and "updates," malicious software can be installed if your vendor was hacked.

Building an effective security strategy for the digital transformation

DT can be used both for positive changes in society and for causing threats to global stability and security. The so-called "cyber weapon" is an example here. In order to determine the security strategy of your business and public administration systems during constantly growing instability, you need to understand what "security" is. The concept of security divides into three large groups: personal, public, and state. Personal security is a state in which a person is protected from any type of violence (for example, psychological, physical, etc.). Public safety is the ability of social institutions to protect individuals and society from various types of threats (mainly internal).
State security is a system for protecting the state from external and internal threats. Another important area of security is cybersecurity and information protection. The goal of cybersecurity specialists is to ensure data confidentiality, integrity, and availability. These three key principles of cybersecurity are called the CIA Triad:
- Confidentiality is the property of information to be closed to unauthorized persons.
- Integrity is preserving the correctness and completeness of data.
- Availability is the property of information to be available and ready for use at the request of an authorized person or resource.

The main goal of cybersecurity (in the context of digital transformation) is to ensure the security of both data and IT infrastructure from accidental or deliberate threats that can cause unacceptable damage.

Today, SIEM (Security Information and Event Management) systems are gaining more and more popularity. Their main task is to monitor corporate systems and analyze security events in real time, including with AI and deep machine learning. Large technology companies that lead in the area of digital transformation are much more likely than others to integrate their products and information security tools into a single corporate security architecture. It should be noted that such companies give preference to a strategic approach and the formation of a security policy, which allows them to:
- Quickly detect threats and promptly respond to them.
- Provide high-quality protection of data assets.
- Have a transparent technological environment for detecting threats.

Leaders of digital transformation are, as a rule, more willing to automate cybersecurity processes in their companies. It is much more effective than the manual monitoring of threats that was used everywhere before the period of DT. A positive example of this automation and integrated approach is the implementation of a Security Operations Center (SOC).
However, it should be borne in mind that setting up the automation of all work processes requires plenty of testing time and the need to attract competent specialists. One of the features of cybersecurity in the era of digital transformation is the introduction of means of centralized control over compliance issues (both industrial standards and IT and security standards). This increases the efficiency of both security measures and compliance efforts.

One of the major obstacles on the way to digital transformation is the need to ensure a high level of cybersecurity, which is not always possible for most companies to achieve, especially in the SMB sector. At the same time, it is necessary to take into account the growth of both internal and external cybersecurity threats associated with the significant growth of the cybercrime sector, as well as risks arising naturally during the implementation of DevOps, cloud technologies, IoT, etc.

Below are the best cybersecurity practices that I can recommend to companies during the digital transformation process:
- Build a unified security architecture that will provide centralized IT infrastructure management and transparency of all information security events.
- Develop the company's security policy and strategy for protecting the corporate network.
- Implement built-in controls to comply with standards and regulatory requirements.
- Use methods of both preventive and proactive protection.
Security of telecommunication networks
February 21, 2019

Telecommunication networks are an essential part of the day-to-day running of businesses and the provision of essential public services around the world. However, over the last decade, concerns have been raised over the security of these networks and their ability to operate without disruption. This article looks at the main threats to telecommunication networks and the strategies that are in place to tackle these issues.

Businesses, government agencies, health care and the social interactivity of entire nations rely on core telecommunications services such as telephones and internet facilities, which are delivered by network providers. There are a number of natural, non-deliberate and malicious threats posed to various areas of a telecommunication network, including fixed-line, mobile, undersea and satellite links.

What are the main threats to telecommunication networks?

Malicious attacks on telecommunication networks include theft of network equipment, vandalism, espionage and terrorism.

Cable Theft – Copper cabling is often stolen from non-secure parts of a network, as it is seen as a high-value, easy target for thieves. As the cabling links are not normally secured, they are often easy to access and remove. According to an Openreach report, metal theft across all sectors costs the UK economy alone £770 million per year.

Cable Damage – An undersea cable network will often include multiple cables that are routed through one geographic point. Deliberate targeting of these sections of the network would damage cabling, potentially leading to service outages. Malicious attacks on undersea cables are rare, but it is possible for submarines or ships to intentionally drag their anchors along the sea floor in an attempt to damage cable links.

Signal Jamming – It is possible to disrupt mobile and satellite signals with jammers that transmit radio signals to interfere with them.
Devices range from handheld jammers with a reach of tens of metres up to industrial-scale tools that are able to disrupt signals from up to 750m away.

System Failure – Hardware and software failures are common in telecommunication networks. Poor planning, which results in telecommunications products not being maintained and replacement parts not being readily available, can be seen as a threat to critical telecom infrastructure.

Power Failure – Power supply to networks is vital for smooth operation. If power supply systems are not backed up and fail-safes are not put in place in case of a power outage, entire sections of networks can drop out.

Accidental Cable Damage – As described above, cable damage can cause significant disruption to networks, but the damage caused may be accidental. It is important for operators to ensure that, even when cabling is secure, the possibility of accidental damage is limited as much as possible.

Poor Weather – Bad weather such as flooding, high winds and hot or cold extremes can cause disruption to telecommunication systems. Operators need to ensure that everything is done to protect their infrastructure as much as possible.

A lot of personal data is stored within telecommunication company databases, which has made them a target for cybercrime, particularly over the last decade. There are a number of cyber threats to telecommunication infrastructure that could not only leave data vulnerable but also potentially cause networks to fail altogether.

Device Compromise – Devices used across various areas of a telecommunications network (such as routers) are vulnerable to cyber-attacks. Hackers are able to launch attacks, often anonymously, to access services. An example of device vulnerability would be within the supply chain (something that has recently made headlines around the world when various governments raised concern over the security of Huawei telecommunication products).
Man-in-the-middle Attacks – Communications between two parties may be vulnerable to interception by a third party. The information could be recorded, collected and even altered by an attacker.

Legacy Protocols – If products within a telecom network are still running legacy software that was not designed to withstand any or all of the above, then the equipment will be vulnerable to more modern, sophisticated attacks.

How to protect against security threats within a telecommunications network

In order to limit the damage caused to telecommunication networks by flaws in security, operators can do a number of things:
- Maximise the strength of physical network protection (secure property, reinforce cabling, etc.)
- Join telecom security schemes designed to audit and monitor security threats and advise accordingly
- Carefully consider the locations of network equipment and factor in weather risks when planning and deploying infrastructure (e.g. check flooding risks)
- Keep stocks of spare parts available across the network to guarantee downtime is kept to a minimum when security flaws occur
Are you receiving corrupt files during FTP transfers? It might simply be due to an incorrect data type setting. In this post, we help you understand the nuances and differences between FTP binary and ASCII data types (transfer modes) so you can avoid these issues. Let's begin our discussion with an example of using binary vs. ASCII.

Using FTP Binary Or Image Type

Let's download an image file named firefox.jpg using the FTP GET command. Notice how the download proceeds without any issues. However, when you try to open the file, that's when you'll see the problem. Here's what happens when we try to open the file using the Linux gThumb application. You can see it only shows the JPEG icon instead of the image itself. Now, here's what we see when we try to load the image using the Image Viewer. Clearly, there's something wrong with the file.

Let's do that again. This time, let's issue the Binary command before executing the Get command. The actual command that's sent to the server is TYPE I, where I stands for Image. Image mode and Binary mode mean the same thing in FTP. This command tells the server that the transfer is going to involve a file with a binary data type and to prepare for a binary mode transfer. The download proceeds as before. But now, when we try to open the file using the gThumb application, we can see the actual image. The same thing happens when we load the image file using the Image Viewer.

This worked because an image file requires an image or binary transfer mode, which transfers files as is. The reason we had a problem earlier was that I had actually issued the ascii command (not shown in the screenshot) before downloading the file. This executes the TYPE A command, where A stands for ASCII, and sets the transfer mode to ASCII. More about ASCII shortly.

Image files aren't the only files that should be transferred using image mode. Other files that need to be downloaded using the binary transfer type include:
- Image files (e.g. .jpg, .bmp, .png)
- Sound files (e.g. .mp3, .wma)
- Video files (e.g. .avi, .flv, .mkv, .mov, .mp4)
- Archive files (e.g. .zip, .rar, .tar)
- Other files (e.g. .exe, .doc, .xls, .pdf, etc.)

Most popular FTP clients (the BSD command line client included) already use the binary or image type by default, so there's usually no need to issue the binary command if you download an image file. So why would you need the ASCII transfer type?

When To Use FTP ASCII Transfer Mode

The ASCII data type or transfer mode is recommended if you want to transfer text files. In general, files whose contents can be read using a simple text editor like Notepad, nano, or pico are considered text files.

Note: Some text files, like those using UTF-8 character encoding, may contain characters not supported by ASCII. For example, Japanese, Chinese or Korean characters aren't supported. These text files are exceptions and should be transferred using binary mode.

But why is it necessary to use the ASCII transfer mode? This is due to the way end-of-lines (EOLs) are handled. In FTP, EOLs in ASCII files (e.g. text files) are denoted by carriage return + line feed (CRLF) pairs (see RFC 959). The thing is, not all platforms use CRLF for end-of-lines. While Microsoft Windows does use CRLF, UNIX systems like Linux, FreeBSD, AIX, and Mac OS X don't. These systems only use LF for line endings. Some archaic systems, like the Commodore 8-bit machines, Mac OS up to version 9, and the Acorn BBC, used only CR.

So, in order for text files to be usable upon arrival at their destination platform, changes have to be made to line endings. If the sending platform is Windows and the receiving platform is Linux, then the sender won't have to make any changes, but the receiver would have to remove the CRs. If the sender is on Linux and the receiver is on Windows, the sender would have to add CRs and the receiver wouldn't have to do anything.
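This end-of-line translation, and why it corrupts binary files, can be sketched in a few lines. The function below is a simplified model of what an ASCII-mode receiver on a Unix host does (strip the CR from every CRLF pair); it is not the FTP protocol itself, and the JPEG fragment is a made-up byte string for illustration.

```python
def ascii_receive(data: bytes) -> bytes:
    """Toy model of a Unix-side ASCII-mode receiver: CRLF on the wire
    becomes a bare LF on disk."""
    return data.replace(b"\r\n", b"\n")


# A text file survives: its CRLF line endings are meant to be translated.
text = b"line one\r\nline two\r\n"
print(ascii_receive(text))  # b'line one\nline two\n'

# A binary file does not: any 0x0D 0x0A byte pair inside the data is mangled.
jpeg_fragment = b"\xff\xd8\xff\xe0\r\n\x10JFIF"
print(len(jpeg_fragment), len(ascii_receive(jpeg_fragment)))  # 11 10
```

The file silently shrinks by one byte for every CRLF pair that happens to occur in the binary data, which is exactly the kind of corruption seen with firefox.jpg above.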
When you issue an ASCII command, each sender (whether the FTP client or server) will have to make the necessary changes. Incidentally, these changes involving CRs and LFs aren't suitable for binary files. That's the reason why the image file got corrupted after the download in the example. In fact, binary files can get corrupted even if both the client and server are running on either Linux, Mac OS X, or some other platform that automatically adds/removes CRs or LFs to line endings. While there are some text files that do fine when transferred using binary mode, there are those that really require ASCII mode. Scripts, for instance, can cause problems when transferred in binary. Some FTP clients are now capable of detecting the type of file to be transferred and automatically set the transfer mode accordingly. But if you're using a client that doesn't have that capability but supports a manual method of setting the transfer mode, what you've learned today should be helpful. But what if the client doesn't issue a TYPE command to specify the data type? Some servers, like JSCAPE MFT Server, allow you to set a default transfer mode. Get Your Free Trial Would you like to try this yourself? JSCAPE MFT Server is platform-agnostic and can be installed on Microsoft Windows, Linux, Mac OS X and Solaris, and can handle any file transfer protocol as well as multiple protocols from a single server. Additionally, JSCAPE enables you to handle any file type, including batch files and XML. Download your free 7-day trial of JSCAPE MFT Server now.
An internationally recognized best practice for an information security management system (ISMS), ISO/IEC 27001 helps organizations to build resilience and protect information. So it's no surprise that companies invest in training their people to get the knowledge and skills to use ISO/IEC 27001 to secure their business.

If you're new to ISO/IEC 27001 and need to take the lead on implementing a management system, then this course is for you. You will learn the importance of an ISMS and get the vital skills to interpret and implement the requirements, carry out a gap assessment, as well as gain awareness of management tools and techniques. The five-day course is packed with practical activities, group discussion and classroom learning to help you retain the knowledge to implement an effective management system. It includes an exam on the final day, and upon successful completion you will be awarded the BSI ISO/IEC 27001 Lead Implementer qualification.

Who should attend?
- Those who will be involved in advising top management on the introduction of ISO/IEC 27001 into an organization
- Those planning to lead and implement a system, or new to managing a system
- Anyone working within information security, including consultants

Course Objectives and Benefits
- An understanding of effective information security management throughout an organization, and therefore protection of your information (through integrity, confidentiality and availability) and that of your interested parties
- Develop vital processes, policies and procedures that can be put into practice immediately
- Create the framework for your own Information Security Management System (ISMS)
- Gain knowledge to develop your ISMS framework and build awareness and support for information security across your organization
- Be confident that you have the capability to protect your business and meet stakeholder expectations
- Encourage continuous professional development across your organization
You will learn:
- The key concepts and principles of ISO/IEC 27001:2013
- The terms and definitions used
- The main requirements of ISO/IEC 27001:2013

You will also be able to:
- Identify a typical framework for implementing ISO/IEC 27001 following the PDCA cycle
- Conduct a baseline review of the organization's current position with regard to ISO/IEC 27001
- Interpret the requirements of ISO/IEC 27001 from an implementation perspective in the context of your organization
- Implement key elements of ISO/IEC 27001
- Explain the concepts of leadership, elements of project management, managing organizational change, skill sharing and support/motivation during the implementation

Prerequisites: five years of experience, including a minimum of two years of information security work experience.

Benefits:
• Confidently implement and maintain an ISMS
• Be prepared with management tools and techniques
• Successfully carry out a gap analysis
• Network with like-minded peers
• Develop professionally and gain a recognized qualification

Course outline:
- Information Security Management (ISM)
- Background to ISO 27001/ISO 27002
- Clause 4: Context of the organization
- Clause 5: Leadership
- Clause 6: Planning
- Clause 7: Support
- Clause 8: Operation
- Clause 9: Performance evaluation
- Clause 10: Improvement
- What is an ISMS?
- Terms and definitions
- Implementing a management system
- Requirements and documentation
- Baseline gap analysis
- Risks and opportunities
- Objectives and targets
- Monitoring, measurement, analysis and evaluation
- Internal audit and management review
- Nonconformity, corrective action process and improvement
- Leadership and management
- Eight disciplines problem solving
- Specimen exam paper
- Introduction to the exam
- Reflection and feedback
This is a collection of notes intended to introduce the fundamentals of file systems. This section summarizes the challenges of using hard drives and the general objectives of file systems. Subsections introduce simple file systems that are for the most part obsolete today.

When faced with a large expanse of storage space, whether solid state on a USB stick or on a hard drive, one has three problems:
- How to store distinct groups of data in a unified manner – these are usually called files
- How to find files – this usually leads to directories, or folders as they are called in graphical interfaces
- How to manage the free space on the hard drive – avoid losing space to fragmentation or errors

Over the decades, different file systems have produced different solutions to these problems. Usually the differences can be traced back to the following, sometimes mutually exclusive, objectives:
- Easy to implement
- FAST, FAST, FAST
- Direct access versus sequential access of data in files
- Support for hard drives of a particular maximum size

As hard drives have grown in capacity, file systems have grown in complexity. Still, the systems' weird features usually trace their origins back to the problems being solved or the particular objectives being pursued. If we look back into ancient history, when semi-trailer-sized behemoths were being out-evolved by refrigerator-sized creatures in university computer labs, we find many comprehensible file systems:
- Classic Forth – as close to no file system as you can get
- Digital's RT-11 – very simple and "flat" file system
- Boston University's RAX Library – for "timesharing"
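The three problems above can be sketched as a tiny "flat" file system of the RT-11 flavour: a directory of (name, start, length) entries and a free-space map, with each file stored as one contiguous run of blocks. This is a toy for illustration, not the actual on-disk layout of RT-11, Forth, or any real system.

```python
BLOCKS = 16  # toy disk of 16 fixed-size blocks

directory = {}           # problem 2: finding files by name
free = [True] * BLOCKS   # problem 3: tracking free space


def create(name, nblocks):
    """Problem 1: store a file as one contiguous run of blocks.
    Contiguous allocation is easy to implement and fast to read,
    but the disk fragments as files come and go."""
    run = 0
    for i, is_free in enumerate(free):
        run = run + 1 if is_free else 0
        if run == nblocks:                 # found a big-enough hole
            start = i - nblocks + 1
            for b in range(start, i + 1):  # mark the run as used
                free[b] = False
            directory[name] = (start, nblocks)
            return start
    raise OSError("no contiguous space: disk is fragmented or full")


create("A", 4)          # occupies blocks 0-3
create("B", 4)          # occupies blocks 4-7
print(directory["B"])   # (4, 4)
```

Even this toy shows why the objectives conflict: the scheme is trivially simple and gives direct access (seek = start block + offset), but deleting "A" and creating files of other sizes soon loses space to fragmentation, which is exactly the failure mode more complex file systems were built to avoid.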
READ TIME: 6 MINS.

There's such an overwhelming amount of computer viruses and malicious software out there that it almost seems impossible to keep up with the constant updates and variants. Have you ever wondered what malware is and how to avoid it? If this is your first time hearing the term malware, you've come to the right place. Before I dive in on the best ways to protect against it, I'll give you a little background on what it is.

Malware is short for malicious software and refers to a type of computer program that is designed to infect a user's computer and inflict damage in various ways. Malware is powerful and can attack a computer in several forms, such as viruses, worms, Trojans, and spyware. I will explain each of these below. Malware is one of the biggest threats on the internet to date. With so many of us working from home now, it's even more important to learn the best ways to keep your computer and network safe and secure. Often when a malware attack takes place, the user is unaware.

So, in simpler terms, is malware just a computer virus? Kind of, but not really. A computer virus is just one type of malware, and even though you'll hear the terms "malware" and "virus" used interchangeably, they aren't the same thing.

Cyber Security: What Is Malware and How Does It Work?

Even though malware comes in many different forms, it all follows the same pattern. Without knowing, the user downloads or installs the malware onto their computer or tablet, and that device then becomes infected. You may be wondering, "How do I know I'm downloading this harmful virus?" Well, you don't always know. Sometimes you need to rely on your anti-virus software, your IT team, or your IT provider to put the necessary preventive measures in place. The action of unknowingly downloading malware could come from clicking a link in an email or visiting a malicious website. However, you can also experience an attack through file-sharing services that online hackers have control over.
Cybercriminals have the potential to embed types of malware so that it spreads from one user to another when people share files, internally or externally. Oh, and by the way, mobile devices (such as smartphones) are not safe from this type of virus either, and can become infected in the same ways as a computer or tablet. Instead of the virus transferring through a computer network or critical infrastructure, it's transferred through text messages, online advertising, and even directly from phone to phone.

Another common vector for malware is the use of a USB stick or flash drive. You probably have many files stored on a USB stick that you wanted to transfer from your work computer to your home office as your business switches to a work from home (WFH) environment. If this is the case, you should pay careful attention. Because the harmful virus can be loaded on the internal hardware of a device (as opposed to its file storage), your computer is less likely to detect the malware. So make sure that you are familiar with any USB drive that you load into your computer.

Common Types of Malware and How They Differ

For the most part, the majority of malware falls into the following classifications:

- Virus: Similar to a virus you can contract from another person, computer viruses attach themselves to clean files and infect other clean files, spreading uncontrollably, damaging a system's core functionality, and deleting or corrupting files. Viruses typically appear as an executable file.
- Trojans: Trojans disguise themselves as legitimate software but act discreetly to create backdoors in your security, allowing malware to enter. Trojans received this name because they act much like the Trojan Horse of fable. The tricky thing about Trojans is that they can also infect your computer by tampering with clean or new software without your knowledge.
- Spyware: Spyware is pretty self-explanatory – it's designed to spy on you.
Spyware hides in the background and takes notes on what you do online, including passwords, credit card numbers, and surfing habits. It can then be accessed or sent directly to hackers so that they can have access to your private information and any of your online use. You can think of spyware as a virtual way of watching you, then robbing you. - Worms: Using network interfaces, worms infect entire networks of devices, either local or across the internet. Once infecting one machine, worms can travel and infect other machines. They call them worms because these types of software burrow from one machine or network to another. They cause damage by spreading and using your computer network and begin consuming bandwidth and overloading your network servers. - Ransomware: Ransomware is a type of malware that can lock down your computer, preventing you from even logging in to your machine. Ransomware can threaten to erase every single document and piece of information on your machine unless a ransom is paid to the owner. Even if you pay the ransom, there is no guarantee you will regain access to your information, it will be intact, they won’t try to request more money, or create a backdoor into your system for future access. Learn more about other common data and cloud security breaches by reading our article, Top 5 Common Data Security Breaches: Prevention. Cyber Security Best Practices: Protecting Against Malware Now that you have a better understanding of what malware is and how it affects your sensitive data, network, and files, you’re probably curious about some ways to prevent malware from occurring in the first place. Below are some best practices that I follow and always try to keep in mind to further protect my network and the overall network of my organization. - Don't click - if you don't trust the source of information explicitly, don't click the links offered in emails, pop-up messages, text messages, and on websites. 
- Don't download: unless you are on a secure and reputable website, don't download files or programs.
- Look for the lock: websites without a padlock near the URL aren't secure. Anything you download or copy from them could be infected with malware.
- Install a firewall: firewalls block unauthorized access and provide an extra barrier against malware.
- Keep software updated: make sure all your apps and anti-virus software are updated so you have the freshest weapons against malware.

In some cases, malware can be removed by restoring your device to normal or factory settings. Unfortunately, this is not true for all types of malware; it depends on how badly your device is hit.

Partnering with an experienced and well-versed managed IT services company is one of the best ways to prevent malware in the first place, or to take care of the matter quickly once an infection occurs. A great managed IT company will provide you with an extra layer of network security through anti-malware or anti-virus protection software. It's helpful to have a team of experts to rely on for the security of your network so that you can focus on other pressing tasks. Learn more about managed IT services and how your business can benefit by reading our article, Cybersecurity Through Managed IT Services: A Safer Business.

The Final Say On Protecting Against Malware

Malware (along with other malicious code and cyber attacks) is a dense topic that can affect businesses and employees in many different ways. As you continue to work from home, or if you're still operating out of your office, make sure to take advantage of these pointers. The best thing you can do for your organization is to keep it safe and protected from outside threats and attackers. At AIS, we're not just a technology company. We're a company dedicated to equipping you with the knowledge and tools you need to continue on your path of growth and success.
Our goal is to always provide you with new and emerging data technology solutions for your business, customers, and employees. To learn more about our products and services, reach out to one of our business technology consultants. We’re here to give you peace of mind to help you win more business.
The cloud can be a perplexing place to deploy applications. Every cloud service claims its own distinct advantages, making it difficult to determine which is genuinely best for you. In the early days of cloud computing there were only two deployment methods, public and private; now we have a plethora of new alternatives to pick from, and the options can be overwhelming. Your company's evolving demands will ultimately guide your choice of supplier and deployment methodology. But what is a cloud computing deployment model, and how do you pick one?

Cloud Computing Definition

There are numerous definitions of cloud computing; to name two:

"Cloud computing is the delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the internet ("the cloud") to offer faster innovation, flexible resources, and economies of scale. You typically pay only for the cloud services you use, helping you lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change." – Microsoft

"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." – National Institute of Standards and Technology (NIST)

Various definitions exist, but one thing is certain: cloud computing is a service that encompasses various deployment models, each with its own set of advantages and disadvantages.

Benefits of Cloud Computing

Like the web-based email clients that have been around for more than 20 years, cloud computing enables users to access all of a system's functions and information without having to store most of that system on their own computers.
Gmail, Google Drive, OneDrive, Amazon Web Services, and others are examples of cloud services that people use on a daily basis; even Facebook and WhatsApp are cloud-based services. Cloud computing has many advantages for your company. You can set up what is essentially a virtual office, giving you the freedom to connect to your business from any location at any time. The rising use of web-enabled devices in today's corporate environment (such as smartphones and tablets) makes access to your data even simpler.

There are many advantages to adopting the cloud and moving your company to it. Some key advantages of cloud computing are:

- Cost Savings
- Security
- Competitive Edge
- Mobility
- Loss Prevention

Cost Savings: Using cloud computing could make managing and maintaining your IT systems less expensive. You can cut costs by utilizing the facilities of your cloud computing service provider rather than investing in pricey systems and equipment for your company.

Security: Many organizations are concerned about security when implementing a cloud computing solution. Sharing and accessing your files from anywhere on the internet could also give threat actors an opportunity to reach your company's sensitive data. To address this, cloud service providers employ dedicated IT teams whose main job is to carefully monitor security threats and loopholes. Additionally, most cloud service providers adopt end-to-end encryption for transmission over networks and storage in databases. Encryption makes information far less accessible to hackers or anyone not authorized to view your data.

Competitive Edge: Cloud computing gives you a competitive advantage over your rivals. One of the main benefits of cloud services is that you can get the most recent programs at any time without having to invest your time and money in installation.

Mobility: Through smartphones and other devices, cloud computing enables mobile access to corporate data.
It is a great way to ensure that no one is ever left out of the loop. This allows employees with busy schedules, or who live far from the corporate headquarters, to stay in immediate touch with clients and coworkers. During the COVID-19 pandemic, almost every organization and firm, including Google and Microsoft, was affected and adopted work-from-home policies in response; working from home, or completing tasks remotely, has been made practical by cloud computing. Employees can simply access all available services whether they are working on-site or in remote locations. All they require is internet access.

Loss Prevention: If your company doesn't invest in a cloud computing solution, all of your important data is inseparably tied to the office PCs it lives on. That might not seem like a concern, but if your local hardware develops a problem, you risk permanently losing your data. Storing your precious data in the cloud avoids this: when data is kept in the cloud, it is simple to back up and restore, a labor-intensive procedure when done on-premises.

Models of Cloud Computing

In its definition of cloud computing, NIST also names four cloud deployment models:

- Public cloud
- Private cloud
- Hybrid cloud
- Community cloud

A cloud deployment model is defined by the location of the deployment infrastructure, the kind of party in charge of it, and the setting of its characteristics, such as the quantity of storage and accessibility. The following definitions apply to the various deployment models:

● Public cloud: a publicly accessible cloud where the data is kept on servers owned by a third party. Cost-effectiveness, accessibility, scalability, and ease of setup and use are advantages of this form of cloud computing; disadvantages include security concerns, a lack of customized customer service, and diminished reliability.
● Private cloud: a cloud accessible only to a single organization. Technically speaking, the main distinction between public and private clouds is that a private cloud is controlled by one business. A personalized approach to consumers, more control over sensitive company data, and improved security and dependability are all advantages of private clouds. The main drawback, of course, is cost.

● Hybrid cloud: a mix of public and private cloud infrastructure and functionality, combining the finest aspects of both deployment approaches. The advantages include greater security and dependability, increased scalability, and fair pricing. The catch is that this model only works if a corporation can partition its data and store it in the appropriate clouds.

● Community cloud: computing resources shared among a typically small number of businesses and organizations. In this respect the model mirrors a private cloud. Shared costs and improved compliance are the advantages; the key drawback is that it is more expensive for individuals or small businesses to use than a public cloud.

The benefits of adopting cloud deployment models for your business purposes are manifold. All you need is a cloud computing company that delivers a high-quality service. Businesses are increasingly focused on obtaining cloud computing that provides not only high-quality utility but also a secure location for their sensitive data.

Why should your business use cloud computing?

A business should consider deploying its services or applications in the cloud for a number of reasons. First, since there is no need to buy servers (or other infrastructure), it can reduce overall costs, including labor expenses. It can also provide enhanced scalability if the business sees higher demand than anticipated.
The advantages of cloud computing are already widely recognized; we list the most appealing characteristics below.

Cost-cutting: Businesses are mostly looking for ways to cut expenses and get more for less money. Clouds remove the need to buy hardware or software, set up an additional power source, or budget for capital expenses.

Improved efficiency: Better performance, greater potential for automation, more productivity, and speed are further benefits of cloud computing.

Scalability: Businesses can meet sustainable growth objectives by taking advantage of scalability and expanded capabilities, which also offer benefits for business continuity.

Productivity: Fast and effective data backup and recovery provide a definite advantage over on-premises systems.
The manufacturing industry has overwhelmingly embraced automation and digitization, capitalizing on the improved efficiency in controlling and monitoring operations. However, the efficiency comes with a catch: a higher risk of cyberattack from modern malware.

Ransomware is one of the most common forms of malware. It allows hackers to access and cripple computer systems, then keep those systems down until the affected parties pay a ransom. Some hackers may opt not to disable the systems, but instead encrypt the data, making it inaccessible to the user. In 2020, ransomware attacks against individuals declined, but attacks against businesses increased because they're more profitable and a company is more likely to pay. The ransom per attack rose from around $100,000 to over $300,000.

Cybercriminals are drawn to the most vulnerable and profitable sectors, which is probably why in 2020 the manufacturing sector received 17 percent of the attacks on businesses and organizations. A digitized manufacturing plant usually interconnects its IT systems, providing hackers with access to monitoring systems, designs, intellectual property, and procurement data with one or a few attacks. There are three main reasons why these facilities are a gold mine for hackers.

1. A Higher Probability of Getting Paid: Access to all these systems and data is great leverage to push the company to pay a ransom, and downtime resulting from such an attack can cause extensive damage.

2. High-Value Data: A ransomware attack involves denying users control of their systems, but modern hackers sometimes incorporate data-extraction malware as well. If they obtain sensitive data such as intellectual property, the hackers can extort more money from the manufacturer, or sell it to competitors.

3. Extorting Third Parties: Sometimes the extracted data concerns third parties, like suppliers, clients, and partners. The hackers can use that data to demand a ransom from those entities as well.
Check Point calls this tactic, which became popular in 2020, Triple Extortion Ransomware.

How to Protect Against an Attack

A successful ransomware attack is every manufacturer's nightmare. Fortunately, there are steps organizations can take to protect their systems.

1. Update software regularly: Outdated software and operating systems have weaker security, making them more vulnerable to cyberattacks. Install patches regularly as well; this helps the manufacturer counter the sophisticated technologies hackers use.

2. Train employees: Employees should understand the company's security systems and know what they can do to protect the organization. Hackers often penetrate systems through phishing, and an employee who understands information security is less likely to fall for it.

3. Intrusion detection and prevention systems: Intrusion detection systems (IDS) identify malware that has infiltrated the IT network so it can be removed before it causes damage. Intrusion prevention systems (IPS) are proactive: they stop malware from ever reaching the system by identifying suspicious programs that could be ransomware and blocking them, and they can also block a suspicious user's IP address. Both systems alert the information security team whenever they detect a threat.

4. Use backup systems: Backups of systems and data help companies resume operations as soon as possible. Manufacturers should keep both online and offline backups, updating and testing them regularly.

Should Victims Pay?

The ransom amount demanded by hackers is usually painful, but it is typically a small fraction of the damage the company would face if it could not restore its systems. So, without another means to regain control of the systems, many company executives opt to pay. Paying is not illegal, but is it ethical? Some critics argue that paying encourages the hackers to continue attacking other victims. Manufacturers in the healthcare and energy production sectors are especially sensitive.
They cannot allow their operations to stay down for long. As a result, such companies almost always pay. The threat of ransomware and other cyberattacks will keep growing as manufacturers continue to digitize, automate, and integrate their systems, which means you must take precautions and keep upgrading systems.

Sam Meenasian is the Operations Director of USA Business Insurance and BISU Insurance.
There is a common misconception that cybersecurity is fundamentally about the implementation and management of technical and non-technical control measures: installing firewalls, doing pen tests and implementing security awareness programmes. Whilst all of these are valid activities, cybersecurity, much like any business activity, is really about understanding the risks your business faces and putting mitigations in place to reduce that risk to an acceptable level. Within the cybersecurity industry, assessing risk is (hopefully!) a natural first step that then drives all other activities. However, we see that it doesn't necessarily come naturally, and that it needs continual promotion in the wider industry.

The first step is to be able to define what risk is. Most generally, risk is "the effect of uncertainty on objectives" (ISO 31000): you have something you're trying to achieve, like building a bridge, but despite all the surveys you cannot be certain you won't hit unfavourable soil conditions when you start digging. Of course, the effect of uncertainty can be a positive thing too, and you may complete the project under time or budget. Within cybersecurity we narrow the definition slightly, so risk becomes "a measure of the extent to which an entity is threatened by a potential circumstance or event", but really we can think of it in the same way. We have a business objective to achieve, such as transferring data to a third party, and we want to make sure that any "uncertainty" (things outside our control, such as someone trying to read the data as it flows on the wire) is mitigated to an appropriate extent.

One of the things dealing with COVID-19 over the last eighteen months has shown is that people's reactions to risk vary widely.
Given the same set of data, recommendations and guidelines, some people adopt a very risk-averse posture, whereas others prioritise continuing their normal activities. Much like in cybersecurity, it can be difficult to judge the right path to follow when faced with mixed messages, marketing and peer pressure! Humans are not great at judging risk; we tend to be unconsciously biased in many ways, and this is rooted in our psychology. For example, if you are doing something voluntarily, like playing sports, you'll accept a much higher level of risk than when facing something you cannot control, like the air quality around you.

Reactions to risk in business tend toward one of two extremes: becoming ossified and unable to evolve, or becoming incredibly lax with ever-increasing complexity. We all know of, or have heard of, large organisations where getting anything done is a huge challenge, with staff working around processes and procedures to complete their work, introducing risk in the process. Likewise, as complexity grows, the organisation and its technology become incomprehensible, again with associated risk. In our experience, identifying and mitigating risk is something all organisations have had to grapple with, but it is never easy. Here are some observations based on our experience helping companies in the maritime space, for whom much of this is relatively new as applied to cybersecurity:

1. All companies are technology companies

Historically, a technology company would be one developing new products or technologies to launch into the market: equipment manufacturers, perhaps. However, this is less and less the case. Companies' reliance on accurate and timely data about their operations is continually growing, for example through the complexity of logistics operations, or the proliferation of sensing systems.
The reliance of companies on the technology they use can especially be seen in the impact ransomware has had over the last five years: denied access to their technology, companies have had to pause or even cease their operations. Having an accurate view of your reliance on technology is essential to accurately understanding the risks you might face.

2. Identifying risk needs to be up-front

At Nettitude, we are often asked to assess the security of products or systems as the final step before they are deployed. In a recent case, we were asked to assess a network product after it had been fully developed and launched on the market. During our assessment we found, as is often the case, a mix of high- and low-risk issues. Although the product vendor was able to fix the major issues, the minor issues required hardware changes or major changes to the way the product is architected. These would have been simple and easy to make during the development phase, but were not cost-effective after the fact. By ensuring that risk is assessed at the start of a project, costly mistakes can be avoided.

3. Mitigations need to be in-depth

Nettitude spends a lot of time carrying out 'red-teaming' exercises, where we emulate an adversary targeting an organisation and attempt to gain access to a critical system or data. The team is often highly successful, but there is never one vulnerability responsible for this. Instead, a chain of smaller vulnerabilities is found and linked together to move from outside the organisation to the target. In the previous example, although the 'high' risk vulnerabilities were mitigated, the 'low' risk vulnerabilities remained. To adequately mitigate the risks you identify, they need to be considered in a holistic context, not individually: an item that is low risk on its own may become a higher risk in the presence of other risks.
Mitigations therefore need to be applied in depth, something which is only cost-effective if risk identification is done at the start of the project.

4. Assessment needs to be continuous

We often find that the understanding of a system or its implementation diverges from the real-world situation. For example, we'll visit a vessel with a network diagram and find that the situation on board has had extra components added or upgraded. Likewise, asking even a simple question like 'how many computers do you have?' can yield wildly different answers depending on whether you ask the team managing the hardware, the network or the anti-virus.

The industry's understanding of the risks posed by different technologies also evolves. The clearest example of this is in cryptography, where new attacks on cryptographic ciphers and advances in hardware mean that recommended best practices that were sufficient ten years ago may now no longer be adequate. All of this adds up to risk assessment needing to be carried out at the start of a project, and then iterated throughout the life of the system, project or business venture. This is something that cannot necessarily scale if delivered by people alone, and here at Nettitude we are working on ways of doing this using artificial intelligence and other automation techniques to help our customers achieve it.

Most attacks on organisations are not directly targeted. Unless the attacker is trying to get access to something unique to you, such as intellectual property, they are typically trying to extract maximum profit for minimum effort (much like any business!). Attackers go after the weakest target they can easily find, for example someone running vulnerable software. At the moment, this doesn't appear to be specific to the technology used solely by the maritime industry, but the wider industry is starting to raise the costs for attackers.
This means that the maritime industry needs to be similarly raising the bar for potential attackers to ensure it doesn't become the next target of attacks.

How is Nettitude able to assist?

We provide independent assurance and threat-led maritime cybersecurity services to marine and offshore organizations around the globe. Find out more about our services here.

This was originally presented as a talk at the Plymouth University Cyber-SHIP Lab Annual Symposium 2021.
Since spectrum auctions in the mobile sector first became widespread in the 1990s, they have tended to generate one of two headlines: either there is a "spectrum bonanza", where more money is raised than expected, or the "Government loses out" by getting less money than it was hoping for. In this respect, the record-setting prices that UK and German operators paid for 3G spectrum almost 20 years ago still seem to influence expectations among governments and industry analysts. But when analysing spectrum awards, we should not lose sight of the impact on consumers.

A debate on this topic has persisted for years, with much discussion of economic theory and sunk costs. Do high spectrum prices, especially those driven by government policies rather than market demand, have an impact on the amount that operators invest in their networks or on the prices they charge their customers? Or can spectrum awards be used to generate more revenue for public services without harming consumers of mobile services? In fact, very little evidence has been gathered on how consumers are affected by spectrum prices, and the research that has been carried out is generally inconclusive.

We therefore recently published a study that isolates the impact of spectrum pricing on consumer outcomes, including network coverage, quality and mobile prices. Looking at 229 operators in 64 countries (including both developing and developed markets) over the period 2010-2017, the research presents strong evidence of a causal link between high spectrum prices and negative consumer outcomes.
Specifically, we found that:

- High spectrum costs played a significant role in slowing the roll-out of next-generation mobile networks in both developed and developing countries;
- More expensive spectrum reduced network quality, as measured by download and upload speeds and latencies;
- Countries that released spectrum early and in larger quantities saw quicker 3G and 4G network roll-out than countries that released spectrum later and/or in smaller amounts;
- High spectrum costs are associated with higher consumer prices in developing countries, though this finding is not conclusive and further research is needed.

What does this mean for spectrum policy?

First, as economists are fond of saying, "there is no free lunch". Governments that want to maximise revenues from spectrum auctions can continue to pursue this objective, but they will now do so in the knowledge that it will have a negative impact on the development of mobile services. This is incompatible with other objectives to expand access to 4G and 5G as enablers of economic growth and sustainable development. Ultimately, policy-makers have to decide what trade-offs they are willing to accept.

Second, auctions can deliver inefficient outcomes when they are poorly designed. One example relates to reserve prices: if these are set too high, precious spectrum may go unsold, as in India (2016), Bangladesh (2018) and Ghana (2015 and 2018), or operators may be forced to pay more than they otherwise would. Another example is when governments artificially limit the supply of spectrum to operators, for example through set-asides (such as the German 3.5GHz auction in 2019) or large and mismatched lot sizes (such as the Italian 3.5GHz auction in 2018). The lesson is that auctions will not automatically deliver an efficient outcome if they are designed to achieve several objectives at once. Policy-makers must decide what their priorities are.
Lastly, spectrum should be released to the market as soon as there is a business case for operators to use it. In a market where long-term value, innovation and cost reductions are driven by short technology cycles (5G launched within ten years of the first 4G LTE networks), unnecessary delays to spectrum awards risk harming network roll-outs and leaving people behind. An up-to-date spectrum roadmap also alleviates uncertainty.

As we move into the 5G era, the temptation to maximise spectrum revenues will remain in many countries; the potential sums involved are often too large to ignore. But if governments want to ensure that spectrum is utilised to support affordable, high-quality mobile services for the benefit of citizens, the best way to achieve this is for operators to pay market-driven prices for spectrum that are not distorted by auction design or other policies. We'll know we are moving in the right direction when 'high' and 'low' auction revenues stop making the headlines and instead generate the same level of media interest as other aspects of spectrum management policy.

Kalvin Bahia – Economist – GSMA Intelligence

The editorial views expressed in this article are solely those of the author and do not necessarily reflect the views of the GSMA, its Members or Associate Members.
The Vernam Cipher is an algorithm invented in 1917 to encrypt teletype (TTY) messages. Named for Gilbert Sandford Vernam, it is a symmetric cipher patented July 22, 1919. The Vernam Cipher combines the plaintext (the original message) with a pseudo-random stream of key characters, using an "exclusive or" (XOR) function, to form the ciphertext. US Army Captain Joseph Mauborgne soon discovered that the cipher could be made much stronger by using truly random numbers printed on pads of paper; pads of random numbers used in that fashion, once and only once, became known as the "one-time pad". The Vernam cipher using a one-time pad is regarded as unbreakable.

For background, a teletype is a character printer connected to a telegraph that provides a user interface for people to communicate over various communications channels such as dedicated or public wires, radio, or microwave.

"The Vernam Cipher with one-time pad is said to be an unbreakable symmetric encryption algorithm in part because its key-exchange process uses true random number generation and secure key distribution."
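The XOR mechanics are easy to demonstrate. Below is a minimal Python sketch; the function name and the use of `os.urandom` as a stand-in for a paper pad are illustrative assumptions, not part of the historical design. Because XOR is its own inverse, the same routine both encrypts and decrypts:

```python
import os

def vernam(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte."""
    if len(key) < len(data):
        raise ValueError("a one-time pad must be at least as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
pad = os.urandom(len(message))       # plays the role of the truly random paper pad

ciphertext = vernam(message, pad)    # encrypt
recovered = vernam(ciphertext, pad)  # decrypt: XOR with the same pad again
assert recovered == message
```

The unbreakability holds only if the pad is truly random, kept secret, and never reused; reusing pad material reduces the scheme to a breakable stream cipher.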
There are two revisions of the IP protocol broadly deployed on systems today. IPv4, the fourth version of the protocol, is currently what the majority of systems support. The newer sixth revision, called IPv6, is being rolled out with greater frequency due to improvements in the protocol and the limitations of IPv4 address space. Put simply, the world now has too many internet-connected devices for the number of addresses available through IPv4.

IPv4 addresses are 32-bit addresses. Each byte, or 8-bit segment of the address, is separated by a period and usually expressed as a number 0-255. Although these numbers are typically written in decimal to aid human comprehension, each segment is usually referred to as an octet to express the fact that it represents 8 bits.

IP addresses are made up of two parts. The first part of the address identifies the network the address belongs to; the part that comes after identifies a specific host within that network. Where the network specification ends and the host specification begins depends on how the network is configured. We will discuss this more fully shortly.

IPv4 addresses were traditionally divided into five "classes", named A through E, designed to partition the available IPv4 address space. These are defined by the first four bits of each address, so you can tell which class an IP address belongs to by looking at those bits.

A Class A, B, or C TCP/IP network can be further divided, or subnetted, by a system administrator. This becomes essential as you reconcile the logical address scheme of the Internet (the abstract world of IP addresses and subnets) with the physical networks in use in the real world.
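As a sketch of how the leading bits map to classes, the following Python function (a hypothetical helper, not part of any standard library) inspects only the first octet, which is where the defining bits live:

```python
def ipv4_class(address: str) -> str:
    """Return the historical class (A-E) of a dotted-decimal IPv4 address,
    determined by the leading bits of the first octet."""
    first = int(address.split(".")[0])
    if first < 128:    # leading bit  0xxx -> Class A
        return "A"
    if first < 192:    # leading bits 10xx -> Class B
        return "B"
    if first < 224:    # leading bits 110x -> Class C
        return "C"
    if first < 240:    # leading bits 1110 -> Class D (multicast)
        return "D"
    return "E"         # leading bits 1111 -> Class E (reserved)

print(ipv4_class("10.0.0.1"))       # A
print(ipv4_class("192.168.123.1"))  # C
```

Modern networks use classless (CIDR) addressing instead, but the class boundaries still explain the historical layout of the address space.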
A system administrator who is allocated a block of IP addresses may be dealing with networks that are not organised in a way that easily fits those addresses. For example, suppose you have a wide area network with 150 hosts on three networks (in different cities) that are joined by a TCP/IP router, with 50 hosts on each network. You are allocated the class C network 192.168.123.0. (For illustration, this address is from a range that is not routable on the public internet.) This means you can use the addresses 192.168.123.1 to 192.168.123.254 for your 150 hosts.

Two addresses that cannot be used in this example are 192.168.123.0 and 192.168.123.255, because binary addresses with a host portion of all ones or all zeros are invalid. The all-zeros address is invalid because it is used to specify a network without specifying a host, and the all-ones address (255 in the last octet here) is used to broadcast a message to every host on the network. Just remember that the first and last address in any network or subnet cannot be assigned to an individual host.

You should now be able to assign IP addresses to 254 hosts. This works fine if all 150 machines are on a single network. However, your 150 machines are on three separate physical networks. Rather than requesting more address blocks for each network, you divide your network into subnets, which lets you use one block of addresses across multiple physical networks.

Every device on a network must be uniquely identified. At the network layer, the packets of a communication need to carry the source and destination addresses of the two end systems. With IPv4, this means that every packet has a 32-bit source address and a 32-bit destination address in the Layer 3 header. These addresses are used on the data network as binary patterns.
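The worked example above (carving three 50-host networks out of 192.168.123.0/24) can be checked with Python's standard ipaddress module: 50 hosts need 6 host bits, so /26 subnets with 62 usable addresses each are enough.

```python
import ipaddress

# The allocated class C block from the example above.
block = ipaddress.ip_network("192.168.123.0/24")

# 50 hosts need 6 host bits (2**6 - 2 = 62 usable addresses),
# so we split the /24 into /26 subnets.
subnets = list(block.subnets(new_prefix=26))

for net in subnets:
    hosts = list(net.hosts())  # excludes network and broadcast addresses
    print(net, "->", hosts[0], "-", hosts[-1], f"({len(hosts)} usable)")
```

This yields four /26 subnets, so the first three can be assigned to the three cities, with one left spare for growth.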
Inside network devices, digital logic interprets these binary patterns directly. For us in the human world, a string of 32 bits is hard to interpret and even harder to remember, so we represent IPv4 addresses using dotted decimal notation.

For every IPv4 address, some portion of the high-order bits represents the network address. At Layer 3, we define a network as a group of hosts that have identical bit patterns in the network portion of their addresses. Although all 32 bits define the IPv4 host address, a variable number of bits make up the host portion of the address. The number of bits used in this host portion determines the number of hosts the network can contain. For example, if we need at least 200 hosts in a particular network, we need enough bits in the host portion to represent at least 200 different bit patterns. To assign a unique address to each of 200 hosts, we would use the entire last octet: with 8 bits, a total of 256 different bit patterns can be achieved. This would mean the bits of the upper three octets represent the network portion.

Learning to convert binary to decimal requires an understanding of the mathematical basis of a numbering system called positional notation. Positional notation means that a digit represents different values depending on the position it occupies. More specifically, the value that a digit represents is that value multiplied by the power of the base, or radix, corresponding to the position the digit occupies. A few examples will help to illustrate how this system works. For the decimal number 245, the value that the 2 represents is 2 × 10² (2 times 10 to the power of 2); the 2 is in what we commonly call the "hundreds" position.
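The positional-notation conversion just described can be sketched in code, turning a string of bits into its decimal value and a 32-bit address into dotted decimal:

```python
def bits_to_decimal(bits: str) -> int:
    """Convert a string of binary digits to decimal using positional notation:
    each digit contributes digit * 2**position, counting positions from the right."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

def dotted_decimal(address_bits: str) -> str:
    """Render a 32-bit address string as dotted decimal, one octet at a time."""
    octets = [address_bits[i:i + 8] for i in range(0, 32, 8)]
    return ".".join(str(bits_to_decimal(octet)) for octet in octets)

print(bits_to_decimal("11110101"))                         # 245
print(dotted_decimal("11000000101010000111101100000001"))  # 192.168.123.1
```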
Positional notation refers to this position as the base² position, because the base, or radix, is 10 and the power is 2.

An important question is: how do we know how many bits represent the network portion and how many represent the host portion? When we express an IPv4 network address, we add a prefix length to the network address. The prefix length is the number of bits in the address that give us the network portion. For example, in 172.16.4.0/24, the /24 is the prefix length; it tells us that the first 24 bits are the network address. This leaves the remaining 8 bits, the last octet, as the host portion. Later in this section, we will learn about another element used to convey the network portion of an IPv4 address to network devices: the subnet mask. The subnet mask consists of 32 bits, just as the address does, and uses 1s and 0s to indicate which bits of the address are network bits and which are host bits.

Networks are not always assigned a /24 prefix. Depending on the number of hosts on the network, the assigned prefix may differ. Using a different prefix length changes the host range and broadcast address for each network. Notice that the network address can stay the same while the host range and the broadcast address differ for different prefix lengths, and that the number of hosts that can be addressed on the network changes too.

The ARP cache (neighbor cache) can contain two basic kinds of entries: permanent (static) entries and dynamic entries. Dynamic entries are shown as Incomplete, Reachable, Stale, Delay, or Probe. It is recommended that you use dynamic entries by default, but there are some situations where a static entry is advisable.
One example is decommissioning a server and replacing it with a different server that shares the old server's DNS name and IP address. In that case, you might consider using a temporary static mapping for the new server in your router ARP tables, and then deleting the old ARP cache entries on your servers.

A related problem is that a router's proxy ARP replies can return the router's own MAC address to the sending host. As a result, the sending host sends its traffic to the wrong MAC address. In other words, the problem stems from proxy ARP replies. To diagnose this issue, use Network Monitor to capture a trace. The trace may reveal that when a sending host sends an ARP request for the MAC address of a target IP address, a device (typically a router) replies with a MAC address other than the target's correct MAC address. To determine whether this is the problem, check the ARP cache of the source host to verify that it is getting the correct IP-address-to-MAC-address resolution. Alternatively, you can capture all traffic with Network Monitor and later filter the captured traffic to show only the ARP and RARP protocols. (The RARP protocol converts MAC addresses to IP addresses and is defined in RFC 903.) You can fix the problem by disabling proxy ARP on the offending device; exactly how this is done depends on the device's make and model, so consult the manufacturer's documentation.

The DHCP protocol supplies automatic configuration parameters, such as an IP address with a subnet mask, default gateway, DNS server address, and Windows Internet Name Service (WINS) server address, to hosts. Initially, DHCP clients have none of these configuration parameters, so to obtain them they send a broadcast request. When a DHCP server sees this request, it supplies the necessary information.
Because of the broadcast nature of these requests, the DHCP client and server must normally be on the same subnet; Layer 3 devices such as routers and firewalls do not forward these broadcasts by default. When troubleshooting relayed DHCP on a Windows 2000 router, check that the interface connecting to the network where the DHCP clients are located is added to the DHCP Relay Agent IP routing protocol, that the "Relay DHCP packets" check box is selected for that DHCP Relay Agent interface, and that the IP addresses of DHCP servers configured in the global properties of the DHCP Relay Agent are the correct addresses for DHCP servers on your internetwork. From the router with the DHCP Relay Agent enabled, use the ping utility to ping each of the DHCP servers configured in the global DHCP Relay Agent dialog. If you cannot ping the DHCP servers from the DHCP Relay Agent router, troubleshoot the lack of connectivity between that router and the DHCP server or servers. Verify that IP packet filtering is not preventing the receiving (through input filters) or sending (through output filters) of DHCP traffic, which uses UDP ports 67 and 68. Also verify that TCP/IP filtering on the router interfaces is not blocking the receipt of DHCP traffic.

Placing DHCP clients and a DHCP server on the same subnet may not always be practical. In such situations you can use DHCP relay. When the DHCP relay agent on a security appliance receives a DHCP request from a host on an inside interface, it forwards the request to one of the specified DHCP servers on an outside interface. When the DHCP server replies to the client, the security appliance forwards that reply back. The DHCP relay agent therefore acts as a proxy for the DHCP client in its conversation with the DHCP server.
Network traffic sometimes fails because a router's proxy ARP request returns the router's own address. A router makes this ARP request for an IP address on its internal subnets, much as a remote access server makes such a request on the LAN on behalf of its remote access clients.
Flaws are everywhere

A vulnerability is a weakness found in a software program which can be exploited to perform unauthorized actions within a computer system. Software development is not a perfect process, and it never will be. Developers are not always used to thinking safety first, and even when they are, it is impossible to think of everything upfront. They do their best to design secure products but may not be able to unveil all hidden weaknesses before an anticipated release date. As a result, code will always be inherently flawed, and vulnerabilities exist in all types of software.

Cat and mouse game

Attackers are persistent in their quest to outsmart developers by punching holes in their code, and it can feel like a never-ending battle. This battle is being fought even more fiercely now that IT has evolved from simple standalone systems into a complex network of connected services. These services consist of different systems, either developed in-house or sourced from different vendors, with each component fulfilling its specific task in the overall service architecture. Keeping this heterogeneous environment operational and protected has proven to be a massive challenge.

Vulnerability Risk Management

Vulnerability Risk Management is the process of identifying, assessing, prioritising and remediating vulnerabilities based on the risk they pose to your organisation. We need a platform that is not only able to discover all the vulnerabilities within your constantly changing and heterogeneous IT environment, but that also provides the capabilities required for vulnerability prioritisation and remediation. The wrong solution could leave you scrambling to keep up with endless lists of non-prioritised vulnerabilities, or worse, give you a false sense of security. The key to success lies in implementing an end-to-end vulnerability and patch management process that allows you to monitor progress in terms of risk reduction.
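The prioritisation step described above can be illustrated with a toy scoring model; the field names and the severity-times-criticality weighting here are assumptions for illustration, not any specific product's algorithm.

```python
# Toy vulnerability prioritisation: rank findings by risk, not by raw severity alone.
# The scoring scheme (severity x asset criticality) is illustrative only.
findings = [
    {"id": "VULN-1", "severity": 9.8, "asset_criticality": 1.0},  # internet-facing server
    {"id": "VULN-2", "severity": 7.5, "asset_criticality": 0.3},  # internal test box
    {"id": "VULN-3", "severity": 5.0, "asset_criticality": 1.0},  # customer database
]

for finding in findings:
    finding["risk"] = finding["severity"] * finding["asset_criticality"]

prioritised = sorted(findings, key=lambda f: f["risk"], reverse=True)
print([f["id"] for f in prioritised])
```

Note how the medium-severity flaw on a critical asset (VULN-3) outranks the higher-severity flaw on a throwaway machine (VULN-2), which is the point of risk-based rather than severity-based ordering.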
Given that there is such a thing as World Password Day, it makes a lot of sense for businesses to consider the nature of their passwords carefully. Passwords are meant to restrict unauthorized access to accounts, documents, and devices, which means they are worthless if people can quickly figure them out. Here are several steps you can take to create a strong password and beat the hackers:

Set up a Longer Password

When setting up a password for an account such as email, there is usually a minimum number of characters required. Beyond that, however, the strength of your password has a lot to do with its length. A good estimate is between 8 and 15 characters.

Don't Use Dictionary Words

Hackers have ways to crack codes, and by using dictionary words you make everything easier for them. A strong password is not one with proper grammatical spelling, or one that matches your favorite word in the dictionary. At the very least, if you have to choose words from the dictionary, mix them up in an order that does not resonate with spoken language. For example, a password like 'iloveyou' is not as strong as 'loveeatcome'. Also, avoid one-word passwords, as they are the easiest to crack, and such a breach may end up undoing all the work your MySQL backup tool has achieved for your company.

Mix up the characters

Instead of just words or just numbers, mix up the characters. It is much safer to have a combination of numbers, symbols, and letters in your password. This way you can even use a word from the dictionary, as long as you add other characters to mix things up, for example something like 'i!l0v#3e!y*o&u'. While at it, mix capital letters and small letters for extra complexity.

Keep away from your family and friends

Using a family member's name for your passwords is a bad idea.
Often, when hackers are targeting your business, they have done enough research to gather details about your close family. Therefore, avoid using names, if only because names are widespread and easy to figure out.

Avoid sequential characters

To the best of your ability, avoid using characters that follow each other on the keyboard, in the alphabet, or in numbers. For example, a password like '123456' is a poor choice, and so are 'abcd' and 'qwerty'.

Using Two-Factor Authentication

Even with a very complex password in place, two-factor verification is an added security measure you need. It allows you to approve any sign-ins and access to accounts and devices: even after typing in the correct password, you need the code sent to your cell phone to verify the access. This gives you more control over who accesses your data, and when they do.

Different passwords for different places

The more consistency there is in the passwords you choose for all your accounts and devices, the easier it is for hackers to figure them out. It does not matter how long and complex you make your passwords if you use the same one for everything: you stand to lose.

Pro tip: use a password manager. As a business, there are tons of places where you use passwords. Since we have established that you need different passwords for all accounts, consider a password manager that can help you safely store all your passcodes, for instance LastPass.
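The rules above (length, no dictionary words, mixed character classes, no sequential characters) can be combined into a simple checker. The word list, the sequence list, and the thresholds here are illustrative assumptions, not an exhaustive policy.

```python
import string

COMMON_WORDS = {"iloveyou", "password", "qwerty", "letmein"}  # illustrative sample
SEQUENCES = ("123456", "abcdef", "qwerty")                    # keyboard/alphabet runs

def password_problems(pw: str) -> list:
    """Return a list of reasons a password is weak; an empty list means it passed."""
    problems = []
    if len(pw) < 8:
        problems.append("use at least 8 characters")
    if pw.lower() in COMMON_WORDS:
        problems.append("avoid dictionary words")
    if any(seq in pw.lower() for seq in SEQUENCES):
        problems.append("avoid sequential characters")
    classes = [
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
        any(c in string.punctuation for c in pw),
    ]
    if sum(classes) < 3:
        problems.append("mix letters, numbers, and symbols")
    return problems

print(password_problems("iloveyou"))       # flags dictionary word and lack of mixing
print(password_problems("L0ve&eat!Come"))  # []
```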
The UDP header consists of four fields, each 2 bytes in length:

Source Port (16 bits) – This field identifies the sender's port. Cleared to zero if not used.

Destination Port (16 bits) – This field identifies the receiver's port.

Length (16 bits) – The length in bytes of the UDP header and the encapsulated data. The minimum value for this field is 8.

Checksum (16 bits) – The checksum field may be used for error-checking of the header and data. This field is optional in IPv4 and mandatory in IPv6; it carries all zeros if unused. For the checksum computation, the data is padded with a zero byte at the end if needed to make a multiple of two bytes. A transmitted checksum of zero means checksumming is disabled; if the computed checksum is zero, the field must instead be set to 0xFFFF.
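Because the header is four fixed 16-bit fields in network byte order, it can be unpacked directly. A sketch parsing an 8-byte UDP header with Python's struct module (the example port and checksum values are made up):

```python
import struct

def parse_udp_header(header: bytes) -> dict:
    """Unpack the four 16-bit UDP header fields (network byte order, '!HHHH')."""
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", header[:8])
    return {
        "source_port": src_port,
        "destination_port": dst_port,
        "length": length,        # header + data, minimum 8
        "checksum": checksum,    # 0 means checksum disabled (IPv4 only)
    }

# Example: a datagram to DNS (port 53) from an ephemeral port, total length 40.
example = struct.pack("!HHHH", 53000, 53, 40, 0x1C2B)
print(parse_udp_header(example))
```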
Intel and the University of Pennsylvania (UPenn) are training artificial intelligence models to identify brain tumours – with a focus on maintaining privacy. The Perelman School of Medicine at UPenn is working with Intel Labs to co-develop technology based on federated learning, a machine learning technique which trains an algorithm across multiple devices or sites without exchanging the underlying data samples. The goal is to preserve privacy.

Penn Medicine and Intel Labs claim they were the first to publish a paper on federated learning in medical imaging, showing that a federated model reached more than 99% of the accuracy of a model trained with a conventional, non-private method. Work building on this, according to the two companies, will "leverage Intel software and hardware to implement federated learning in a manner that provides additional privacy protection to both the model and the data." The two organisations will be joined by 29 healthcare and research institutions from seven countries.

"AI shows great promise for the early detection of brain tumours, but it will require more data than any single medical centre holds to reach its full potential," said Jason Martin, principal engineer at Intel Labs, in a statement.

Artificial intelligence initiatives in healthcare continue apace. Microsoft recently announced details of a $40 million "AI for Health" project, while last month startup Babylon Health stated its belief that it can appropriately triage patients in 85% of cases.
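Federated learning as described above keeps data where it lives and shares only model updates. A minimal federated-averaging sketch on a toy linear model follows; this is pure Python illustrating the general technique, not the Intel/UPenn implementation.

```python
# Minimal federated averaging: each site computes a model update locally;
# only the weights leave the site, never the raw data.
def local_update(weights, site_data, lr=0.1):
    """One gradient-descent step on a site's private data (toy model y = w * x)."""
    grad = sum(2 * x * (weights * x - y) for x, y in site_data) / len(site_data)
    return weights - lr * grad

def federated_average(site_weights):
    """Server step: average the locally trained weights."""
    return sum(site_weights) / len(site_weights)

# Two hospitals, each holding private (x, y) pairs drawn from the same rule y = 3x.
site_a = [(1.0, 3.0), (2.0, 6.0)]
site_b = [(1.5, 4.5), (3.0, 9.0)]

w = 0.0
for _ in range(50):
    w = federated_average([local_update(w, site_a), local_update(w, site_b)])

print(round(w, 2))  # converges towards 3.0 without either site sharing its data
```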
What are phishing attacks?

Phishing is a type of cyberattack where scammers attempt to trick their potential victims into giving out sensitive or personal data by impersonating someone they trust – such as the IRS, your banking service, or even your co-worker. This data can be used for other fraudulent activities, which might result in financial loss or even more phishing incidents targeted at you or the people you know using your credentials. The data being targeted can include various types of sensitive information, such as bank account information, login credentials, and credit card information. For a company, a phishing campaign might lead to bigger losses, such as data breaches, ransomware, or email account takeovers. We've included examples of various types of phishing attacks below so you can see their effect on companies.

The best way to prevent phishing is through proper training. Keep your organization safe with security awareness training through Inspired eLearning today.

Most Popular Phishing Examples

According to the FBI's 2021 IC3 report, 323,972 victims fell for phishing attacks. While the most common phishing method uses email, the strategy itself is so effective that cybercriminals have developed a slew of variants, including vishing and smishing – not to mention the different techniques they've developed to better trick victims. While this article can't cover all of them in detail, we'll tell you about the most popular ones so you can be aware of them and keep your data safe.

Email is a very popular choice for phishers to initiate their scams – in fact, 96% of phishing attacks happen through email. The term "phishing email" usually refers to the method where an email impersonating a legitimate entity is sent to millions of people – literally "fishing" to see if anyone "bites". These emails are often very generic, yet believable enough to mislead victims.
Most email phishing messages press the recipient with a sense of urgency, such as an email claiming to come from your credit card provider notifying you that your account has been blocked due to suspicious activity, so you must follow a link in the email to unblock it. Following the link takes you to a fake web page that looks like your bank's internet login page and asks you to fill in your credentials to log in. Any information you enter there is sent to the attacker and can be used to gain control of your actual account.

These emails are usually easy to pick out. They often use generic greetings (such as "Hi Customer", "Dear User", or no greeting at all), are either riddled with spelling and grammatical errors or overly formal, or use a wrong or outdated logo. It is not difficult to spot them if you are cautious and tech-savvy enough, yet the number of people falling for them still increases yearly. Many factors contribute to these failures, including an overall lack of awareness of how phishing works.

Make sure you're not a victim of a phishing attack. Here are some more ways to spot a fake email. Read this infographic and print it out or share it around the office.

In contrast to conventional phishing attempts that send copies of a generic email message to millions, spear phishing is much more specific and targeted. In these campaigns, attackers target an individual or a group of people holding a high degree of authority within an organization, such as managers or executives. To convince their target that the email is the real deal, they use highly personalized details in the body text, like the victim's address, or even pretend that the email comes from someone within the organization.
Unlike normal phishing methods, which require zero research by the attacker, spear phishers usually do their homework beforehand: the victim's social media accounts, their position within the company, who they might work with, even other private data like a home address or telephone number that could come from previous phishing attempts. Spear phishing is often used in attacks with high-profile targets, such as CEO fraud, or business email compromise. While most of these emails still ask you to click a link, their specificity makes the attacks more difficult to spot.

An example of a phishing email from BendBroadband.

Vishing (Voice Phishing)

Vishing refers to phishing scams that, instead of using written emails, use voice mail or Voice over Internet Protocol (VoIP) technology to call targets. As with email phishing attempts, attackers are ultimately after the victim's credentials and money. Phishers may call the target using a spoofed caller ID or phone number, leave a recorded message in the potential victim's voicemail, or contact them through text and leave a number that the victim can call back. During the scam, attackers use social engineering techniques – like saying you've won a prize or that suspicious activity has been detected in your account – to catch you off guard and persuade you to disclose sensitive or personal information. In most cases, scammers will provide some of the victim's personal details during the call – like "confirming" the last four digits of their social security number or their home address – to establish trust and give an impression of legitimacy. Once the victim is convinced that the call is the real deal, the scammers ask for more confidential information like passwords or access codes.

Smishing (SMS Phishing)

Smishing is really just phishing sent through text messaging on a cell phone, but since it's not as well known as phishing, people are less aware of its existence.
Attackers send text messages claiming to come from websites such as PayPal, or from delivery services like USPS claiming that there's a problem with your package and asking you to follow a spoofed link. On a desktop you can hover over a link and see where it leads, but this is more challenging on a mobile device. The lack of smishing awareness contributed to its significant increase in 2020, when the COVID-19 pandemic began. In countries that issued economic stimulus bills, cybercriminals took advantage of the crisis by sending text messages pretending to come from a government representative with stimulus money. In fact, the problem was so widespread that the Federal Communications Commission (FCC) issued major press releases warning users about COVID-19 SMS scams.

Phishers often attempt to trick their targets by masking a link to a fake website set up by the attacker inside a genuine, trustworthy one. There are different methods to achieve this; one of the most common is hiding the URL inside a website hyperlink, like this: "For further information, visit Google's education portal at https://edu.google.com" (instead of leading to Google's page, clicking the link leads you to a different destination). You can check whether a link is legitimate or fake by hovering your cursor over it, but while this is the most common tactic, it is far from the only one employed by phishers. They may even take advantage of your habit of hovering over links by choosing a lookalike domain name for their website. For example, "inspredelearning.com" or "inspireclelearning.com" may look like "inspiredelearning.com" (if you can't see it, here are some hints: one of them lacks an "i", the other uses a lowercase "cl" in place of "d").

Malware is an umbrella term for malicious code and software used for hacking. It includes, but is not limited to: viruses, keyloggers, spyware, and ransomware.
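The lookalike-domain trick described above can also be caught programmatically, by measuring how close a domain is to a trusted one. A minimal sketch using Levenshtein edit distance follows; the trusted list and distance threshold are illustrative assumptions.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

TRUSTED = ["inspiredelearning.com", "google.com", "paypal.com"]  # illustrative list

def looks_suspicious(domain: str, max_distance: int = 2) -> bool:
    """Flag domains that are close to, but not exactly, a trusted domain."""
    return any(0 < edit_distance(domain, t) <= max_distance for t in TRUSTED)

print(looks_suspicious("inspredelearning.com"))   # True: one letter dropped
print(looks_suspicious("inspiredelearning.com"))  # False: exact match
```

Real mail filters use richer signals (homoglyphs, registration age, reputation), but edit distance alone already catches the two lookalikes from the example above.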
Some phishing campaigns rely on the victim following a link that redirects to a fake website and then extracting their sensitive data through the website interface; others are malware-based and involve downloading malicious software disguised as a genuine file onto the target's device. There are also methods that combine both types of attack, in which attackers prompt their victims to follow a malicious link and download software from a fake website.

The term "keylogging" comes from the words "keystroke" and "logging". As the name suggests, it is the practice of keeping a record of every keystroke someone makes on their keyboard. In other words, it is a surveillance tool installed to keep an eye on the user: what they type, which websites they visit, the different credentials they use for different websites, and their company login info. Some phishing campaigns involve installing keylogger applications on your device without your knowledge; the keylogger could be downloaded automatically when you click on a link, or sent as an attachment, often in the form of a Microsoft Office document. If a keylogger is installed on a device used by someone holding high authority within an organization, it becomes easy for cybercriminals to steal confidential data belonging to the organization. You can avoid mistakenly installing keyloggers by steering clear of suspicious links and attachments; a good anti-malware application will also help prevent incidents.

In March 2021, an attacker encrypted data belonging to the computer giant Acer and demanded $50 million for the decryption software. This is called a ransomware attack. Unlike phishing, which aims to obtain credentials and sensitive information, ransomware holds your data "hostage" until you pay ransom money for it. This malicious software usually wants you to transfer funds through certain online payment methods within a set time limit.
The ransomware might lock your screen or encrypt your files (as in the example below) to restrict your access, and if you fail to pay within the time limit, it will delete your data. Like keyloggers and other malware, ransomware can infect your device after you follow a suspicious link or download a fake attachment. While you can avoid paying the ransom by regularly backing up your data, there have recently been cases where, instead of deleting your data, cybercriminals upload it to an off-site server and leak it if you refuse to pay the ransom.

Train your team to be aware of phishing scams with Inspired eLearning

As you've seen, there are a lot of methods hackers use to phish. Whether they're looking for information, access, or data, the fact is that many attacks caused by a phishing attempt could have been avoided. However, for most of us it's draining to be on alert every time we interact with a message. Discovering and identifying scams is a skill you need to train to develop good security habits. There's no shortcut to it, just training and practice. For this reason, it's recommended that you make regular security awareness training mandatory for all employees. Security awareness training educates users on current cybersecurity trends, so they're more likely to identify an attack when they see one. In addition to training, you can also add a phishing simulation to make sure that employees are more careful with emails and can practice their newly learned skills without the risk of an actual phishing attack. Get a free trial of Inspired eLearning's comprehensive training packages here and see how you can level up your employees' security awareness training today.
The Covid-19 pandemic has exacerbated the pre-existing global shortage of semiconductor chips, which power everything from smartphones to vehicles. Geopolitical issues between the US and China have impeded international trade and driven the need for self-sufficiency, which China is pursuing. China has been manufacturing integrated circuits domestically for many years, but it is now shifting its focus from fast production to high-quality end products. The partially state-owned firm SMIC (Semiconductor Manufacturing International Corporation) is already capable of producing both 14 nanometre (nm) and 28nm chips. In 2019, China's semiconductor manufacturing industry had an output value in excess of CNY750 billion, with this figure projected to reach CNY884.8 billion in 2020. This swift progress has been made possible as China's rate of domestic chip development has exceeded expectations, with SMIC achieving a breakthrough in 5G mmWave chips as well as its successful tape-out of the N+1 process. These technological improvements, together with industrial policies, have underscored China's efforts to produce more chips domestically, with data from Analysys Mason showing that Chinese semiconductor production rose by 16.2% during 2020, a sharp increase from 2019's 7.2%. This increase has been enabled by a number of factors, not least of which is the rapid development of China's integrated circuit market. Over the past few years, this has driven progress in terms of both technology and industrial policy, closing the gap between China's manufacturing processes and those of leading global semiconductor manufacturers. This will allow China's development of integrated circuits to surpass other markets'.
A major factor here is that success in semiconductor development is self-perpetuating – the completion of new wafer fabs in mainland China has brought down the cost of domestic circuit production and thereby allowed it to expand, while the geographical advantages of local production have enabled development to flourish. The Chinese government has created a number of policies to support domestic production of integrated circuits – with a particular focus on creating higher-quality products – acknowledging that the industry is a crucial part of national economic and social development. At the heart of this are 28nm chips – seen as the bridge between the lower and higher ends of integrated circuit manufacturing capacity. Apart from CPU, GPU and AI chips that require relatively high power consumption, the majority of industrial-grade chipsets use 28nm or higher – they have become mainstream compared to 5nm or 7nm chips, and are used in a wide range of products including TVs, air-conditioners, automobiles, high-speed rail, industrial robots, elevators, medical equipment, smart bracelets and drones. “The demand for China to move towards the mid-to-high end is pressing; once we have fully mastered the 28nm technology, we will no longer be stuck with most of the chip demand in the market,” according to Teng Ran, Deputy General Manager of CCIC Research Centre. “The production capacity of chip manufacturers in China on 28nm chips has been fully utilised, reaching at least 98%. This is the year with the highest capacity utilisation rate in the past five years.” Industry observers expect China to become self-sufficient in 28nm manufacture this year, paving the way for increased production of 14nm chips. SMIC was the first Chinese firm to reach this standard and mass-produce the chips, joining the ranks of international firms such as Intel, TSMC, Samsung and UMC.
The 14nm standard is used in around 65% of chips, and is expected to become the main technology used in mid-to-high-end semiconductors, with SMIC noting that the technology has significant potential in many fields – including high-end consumer electronics, high-speed computing, low-end AP and baseband, AI, and automotive applications. The 14nm chips that SMIC has produced can be used in the fields of 5G communications and high-performance computing. The firm has said that it expects to begin mass-producing the chips at scale from next year. While China has made great strides towards self-sufficiency in chip manufacture, the industry must be globalised – the country’s independent research and development is aimed at better integration into the global market, rather than striking out separately and creating a new market. The chipset industry is both capital-intensive and technology-intensive, but it must also act as a collaborative system for global innovation. “Our chip foundation is relatively weak. This is a fact that we must admit. However, against this backdrop, how to do it [increase production of high-end chips] and meet the needs [of Chinese companies] can truly be a place that embodies the Chinese entrepreneurial spirit”, concludes Teng Ran.
Artificial intelligence may have been hyped – but when it comes to medicine, it already has a proven track record. So can machine learning rise to the challenge of finding a cure for this terrible disease? It feels as if a superhuman effort is needed to help ease the global pandemic killing so many, and there is no shortage of companies trying to solve the dilemma. Oxford-based Exscientia, the first to put an AI-discovered drug into human trials, is trawling through 15,000 drugs held by the Scripps research institute, in California. And Healx, a Cambridge company set up by Viagra co-inventor Dr David Brown, has repurposed its AI system, originally developed to find drugs for rare diseases. The system is divided into three parts that:
- trawl through all the current literature relating to the disease
- study the DNA and structure of the virus
- consider the suitability of various drugs
Drug discovery has traditionally been slow. “I have been doing this for 45 years and I have got three drugs to market,” Dr Brown told BBC News. But AI is proving much faster. “It has taken several weeks to gather all the data we need and we have even got new information in the last few days, so we are now at a critical mass,” Dr Brown said. “The algorithms ran over Easter and we will have output for the three methods in the next seven days.” Healx hopes to turn that information into a list of drug candidates by May and is already in talks with labs to take those predictions into clinical trials.
For those working in the field of AI drug discovery, there are two options when it comes to coronavirus:
- find an entirely new drug, but wait a couple of years for it to be approved as safe for use
- repurpose existing drugs
But, Dr Brown said, it was extremely unlikely one single drug would be the answer. And for Healx, that means detailed analysis of the eight million possible pairs and 10.5 billion triple-drug combinations stemming from the 4,000 approved drugs on the market. Prof Ara Darzi, director of the Institute of Global Health Innovation at Imperial College, told BBC News: “AI remains one of our strongest paths to achieve a perceptible solution, but there is a fundamental need for high-quality, large and clean data sets. “To date, much of this information has been siloed in individual companies such as big pharma or lost in the intellectual property and old lab space within universities. “Now more than ever, there is a need to unify these disparate drug discovery data sources to allow AI researchers to apply their novel machine-learning techniques to generate new treatments for Covid-19 as soon as possible.” In the US, a partnership between Northeastern University’s Barabasi Labs, Harvard Medical School, Stanford Network Science Institute and biotech start-up Schipher Medicine is also on the search for drugs that can quickly be repurposed as Covid-19 treatments. Read more: www.bbc.com
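Those combination counts are simple binomial coefficients, and can be sanity-checked in a few lines of Python – a back-of-the-envelope sketch for illustration only, not part of Healx’s actual pipeline:

```python
from math import comb

n_drugs = 4000  # approved drugs on the market

# Unordered two- and three-drug combinations drawn from the pool.
pairs = comb(n_drugs, 2)
triples = comb(n_drugs, 3)

print(f"{pairs:,} pairs")      # 7,998,000 -- the "eight million" quoted
print(f"{triples:,} triples")  # 10,658,668,000 -- roughly the 10.5 billion quoted
```

The exact triple count comes out slightly above the article’s rounded figure, but the order of magnitude matches, which is what makes exhaustive lab testing infeasible and machine-driven prioritisation attractive.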
Teach your developers about access control vulnerabilities
Data extraction techniques are evolving day by day, so it’s not difficult to see why the significance of securing data is also growing rapidly. But what does access control mean, why is it important, and how can we use it to protect our data? Read on to find out!
What is Access Control?
Access control is a comprehensive group of methods that tries to guarantee security and prevent the unauthorized usage of resources, computers, data, and computer networks. It is a fundamental component of data security, and it’s crucial to plan it at the very beginning of the project development lifecycle. It tries to answer questions such as:
- Who should access a company’s data?
- What kind of users should access specific data?
- What kind of access roles and privileges do we want in our application?
- How can we make sure users are granted access to the right resources?
- Under what circumstances do we deny access to a user with access privileges?
The main components of Access Control
As Daniel Crowley, head of research for IBM’s X-Force Red, outlines, access control consists of two main components:
1. Authentication provides and validates identity. It’s the process of verifying that an individual, entity, or website is who it claims to be. Authentication in the context of web applications is commonly performed by submitting a username or an ID, and one or more items of private information that only a given user should know.
2. Authorization defines access rights and privileges to resources. This process determines whether a request to access a particular resource should be granted or not.
Broken access control was the fifth most common vulnerability type in the OWASP Top 10 2017 list, and number one in the OWASP Top 10 2021. Since this can be a massive security issue, prevention is crucial. There are several ways to detect access control flaws, source code analysis and vulnerability scanning being among them.
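The authentication/authorization split can be sketched in a few lines of Python. This is a toy model – the user store, role names and permissions are all invented for the example, and a real system would store salted password hashes, never plaintext:

```python
# Toy illustration of authentication vs. authorization.
# All names and credentials here are hypothetical.
USERS = {
    "alice": {"password": "s3cret", "role": "admin"},
    "bob": {"password": "hunter2", "role": "viewer"},
}

PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "viewer": {"read"},
}

def authenticate(username, password):
    """Authentication: is this user who they claim to be?"""
    user = USERS.get(username)
    return user is not None and user["password"] == password

def authorize(username, action):
    """Authorization: is this (already authenticated) user allowed to act?"""
    role = USERS[username]["role"]
    return action in PERMISSIONS.get(role, set())

print(authenticate("bob", "hunter2"))  # True  -- identity verified
print(authorize("bob", "delete"))      # False -- viewers may only read
```

The point of the sketch is that the two checks are distinct: knowing the password proves identity, but the authorization table, not the login, decides what that identity may do.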
We’ve gathered some of the subcategories of broken access control below, but there are many more. To dive deeper into what these are and how to prevent them, check out our blog post.
Direct object references
One of the most common access control related vulnerabilities is IDOR (Insecure Direct Object Reference), which means someone can access sensitive data they shouldn’t be able to see by referring to it directly. As a result, attackers can bypass authorization and access resources without the necessary permissions.
Mass assignment
Mass assignment refers to assigning values to multiple attributes all at once. If these values are coming from the user, the list of attributes should be validated properly, otherwise the attacker could set values of private properties as well.
Directory traversal
A directory traversal attack can happen due to improper filtering and validation of the user input. It refers to a security misconfiguration or vulnerability that can be exploited by a malicious user in a web application by appending the well-known dot-dot-slash (../) or other similar strings to file paths sent by the application, to traverse up the server’s directories and access private system files.
Supply chain attack
To minimize risks coming from third-party software or software components, and to secure organizational data accessed by other companies, designing and cultivating adequate risk management related to the supply chain is essential. This involves both physical security and security for software, processes, and services.
Get started with secure coding training
On the Avatao platform we make it easy for you to find and assign exercises, and to track your developers’ progress on our interactive access control training modules. Reach out to our team today.
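The directory traversal pattern described above can be blocked by resolving every user-supplied path against a fixed base directory before use. A minimal Python sketch – the base directory is a hypothetical example, and production code should also consider encoding tricks and symlinks:

```python
import os

# Hypothetical document root -- resolved once so that symlinks
# don't confuse the prefix check below.
BASE_DIR = os.path.realpath("/var/www/uploads")

def safe_open_path(user_path):
    """Map a user-supplied relative path to a real file path,
    refusing anything that escapes BASE_DIR."""
    # realpath() collapses "../" segments and resolves symlinks,
    # so "../../etc/passwd" ends up outside the base directory...
    full = os.path.realpath(os.path.join(BASE_DIR, user_path))
    # ...and the prefix check rejects it.
    if not full.startswith(BASE_DIR + os.sep):
        raise PermissionError(f"path traversal blocked: {user_path!r}")
    return full

print(safe_open_path("report.txt"))   # stays inside the base directory
# safe_open_path("../../etc/passwd")  # would raise PermissionError
```

Comparing the *resolved* path, rather than pattern-matching on "../" in the raw input, is the key design choice: it catches obfuscated variants that string filtering misses.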
Virtual Reality hit the mainstream recently with the arrival of affordable, consumer-friendly headsets. Already businesses are keenly embracing the opportunities created by this amazing technology. That isn’t surprising, given the possibilities it creates for freeing our minds from the physical shackles of our body and allowing us to “see” into places that only exist in the digital world. In the digital world, the rules are different – objects can be conjured into being by simply describing them. Travelling between destinations takes place in the blink of an eye. And any damage that you do can be undone with the press of a button. All this makes virtual reality – and its sister tech augmented reality, which I will be covering in a separate post soon – a powerful tool for business. So here is an overview of some fascinating ways it is already being used and some glimpses of what could be achieved in the future.
VR can impact every field of business
Much of the hype around the release of mainstream VR headsets last year focused on their potential for enhancing entertainment experiences. Uptake of VR in business, however, is forecast to outpace leisure use of the technology in coming years, with spending reaching $9.2 billion by 2021, according to research from Tractica. Just about any process that can be carried out in the physical world – and in business that would range from customer services to marketing, finance, HR and production – can be simulated in VR. In general, tasks that it can carry out can be split into one of two categories – training, or practical application. For training purposes, VR offers the potential to immerse ourselves in any situation that can be simulated on a computer. Increasingly photorealistic visuals “trick” our brain into believing, to varying extents, that what we are seeing is real, allowing us to monitor, and learn from, our interactions.
A great example is the public speaking training systems which have been devised using the tech, such as Oculus’s VirtualSpeech. As for practical applications, they are virtually unlimited – key factors here are the potential for enabling humans to carry out tasks without being present (telepresence) and the possibilities for modeling and interacting with simulations of real-world objects that wouldn’t be feasible in real life.
Prototyping and design
In manufacturing and production-driven businesses, VR allows every characteristic of a part, process or mechanism to be simulated and tested. Performance or reliability can be tested and examined under any condition, far more cost-effectively, quickly and safely. Of course, there are usually up-front platforming and tooling costs. But increasingly these are likely to be mitigated by the arrival of VR-as-a-service (more on that in a bit). Millions can be saved by eliminating the need to build full-scale working prototypes, by carrying out initial exploration of ideas in VR. Today this is used in aircraft design, with Boeing and Airbus both extensively using simulated digital spaces to design and test new features and models. Architects have already been converted to the technology, as it allows them to present finished concepts to clients, and allow them to freely explore their designs before a single stone has been set in place.
VR and your customers
VR offers every business the chance to rethink how they present to, and engage with, their customers. As both marketing and customer service tools, VR opens new possibilities for showcasing products and services. Further down the line, it is likely to become a uniquely useful source of information on customer behaviour. This is because when someone is engaging with you in a virtual, digital world, a huge amount of data becomes available on how they act, react and interact.
Rather than visit a physical showroom, customers leading increasingly digital lives will simply put on a headset and appear in a virtual one. Once there, they can interact with sales assistants – which could be virtual representations of real humans, or, more likely as time progresses, AI constructs operating independently of direct human control. If a customer wants to try out your new car, furniture or kitchen utensil, VR will let them do it without leaving their homes. Of course, real-world showrooms are likely to remain a part of the marketing landscape for some time, as for many products there will be a point where consumers want to see and feel the physical product. But for early-stage market research and quickly getting an overview of a brand’s product range, VR will increasingly offer a convenient alternative. Swedish furniture giant Ikea already offers virtual showrooms and many more retailers are likely to follow.
Training in virtual worlds
The most apparent advantage of training in VR is that when things go wrong you simply have to hit the reset switch. This already has applications in healthcare, where surgeons are using it to train in making life-or-death choices while carrying out complex operations on children. This simulation goes as far as scanning and creating 3D representations of the real nurses the trainees will work with, so they will see familiar faces when they get into the actual-reality operating theatre. Other medical uses allow doctors and surgeons to try out new tools and procedures in a safe simulated environment. Equipment manufacturers also gain invaluable feedback thanks to the close monitoring that VR enables. Pilots have relied on sophisticated simulators for decades. But the million-dollar-plus, room-sized simulation suites are starting to be replaced by more cost-efficient and portable VR solutions.
Improved access to state-of-the-art simulators means that, on earning their wings, pilots will take to the skies with far more simulated hours of flying time under their belts. Teachers now have the opportunity to test their mettle against a class of unruly kids in a virtual classroom. As well as rehearsing their teaching methods, they train to look out for disruptive behaviour such as students using their phones during lessons. Law enforcement officers in New Jersey, US, are using a system which allows them to train for scenarios ranging from routine traffic stops to being shot at. The company behind this solution has gone as far as incorporating technology to deliver an electric shock to trainees if they make a dangerous mistake – with the aim of simulating the fear which officers would naturally feel in the field. Apparently, though, this feature has not been activated in routine training yet. Although VR represents a dramatic cut in costs compared to traditional, room-based simulations, there can still be significant up-front expenses. This is particularly true if your training needs require bespoke simulations to be coded and environments designed from scratch. To meet this need we are starting to see the emergence of businesses providing ready-made services, from hireable VR suites to world-building tools. Marketing agencies which have prepared themselves to create virtual and interactive experiences for companies and brands also fall into this category, and are likely to play an increasingly prominent role in the marketing landscape of the near future. The emergence of services like these is already playing an enabling role in the development and deployment of VR across industry and leisure, with VR dating agencies, therapeutic services and entertainment offerings all looking to shake up their respective sectors. VR technology is certain to continue to improve, bringing our experiences in virtual worlds more closely into alignment with those in the real one.
Recent breakthroughs which could have a widespread impact include the emergence of eyeball-tracking technology, allowing us to interact with and activate aspects of a simulation merely by looking at them. Further ahead, experiments are already being done with interfacing brainwave activity, potentially allowing us to alter our environment merely by thinking. Other advances are likely to mitigate some of VR’s current limiting factors, such as the fact that current applications can sometimes feel like somewhat solitary experiences. Current high-end VR devices still generally require expensive, dedicated computers to power them, but this is likely to change as standalone headsets become more capable. Thanks to all of this, it’s likely that increasingly large amounts of our business lives will be conducted in virtual reality as time goes on.
American poet Ralph Waldo Emerson once said, “Every artist was first an amateur.” He likely never thought those words would apply to machines. Yet artificial intelligence has demonstrated a growing aptitude for creativity, whether writing a heavy-metal rock album or producing an original portrait that is strikingly reminiscent of a Rembrandt. Applying AI to the art world might seem unnecessarily derivative; there are, of course, plenty of humans delivering awe-inspiring work. Proponents say, however, the real beauty of training AI to be creative does not lie in the end product, but rather in the technology’s potential to expand on its own machine-learning education, and to solve problems by thinking outside the box far faster and better than humans can. For example, creative problem-solving AI could someday make snap decisions that save the lives of the passengers in a self-driving car if its sensors fail, or propose unconventional combinations of chemical compounds that lead to new drugs for previously untreatable diseases. AI with a creative streak will be essential in developing highly automated systems that can respond appropriately to human life, says Mark Riedl, an associate professor at Georgia Institute of Technology’s School of Interactive Computing. “The fact is, we do lots of little bits of creativity every single day; lots of problem-solving goes on,” Riedl says. “If my son gets a toy stuck under the couch, I have to devise a tool out of a hanger [to retrieve it].” Riedl points out human creativity is also important in human social interactions, even telling a well-timed joke or recognizing a pun. Computers struggle with such subtleties.
An incomplete understanding of how humans construct metaphors, for example, was all it took for an experiment in AI-generated literature to compose a new Harry Potter chapter filled with nonsensical sentences such as, “The floor of the castle seemed like a large pile of magic.” Still, getting machines to accurately mimic human style—whether Rembrandt’s or J. K. Rowling’s—is perhaps a good place to start when developing creative AI, Riedl says. After all, human creators often start off imitating the skills and processes of accomplished artists. The next step, for both people and machines, is to use those skills as part of a strategy to create something original. […]
Back in the days of Eliza, Alice and Jabberwacky – among the first chatbots, developed between the 1960s and the 90s – capability was still rudimentary. When confronted with the complexities of human communication, they got very easily confused. Ultimately they were flowcharts, and their responses resulted from relatively rigid if/then scripts: if asked “What is your name?”, then answer “Alice”. Fast forward to today, and research in AI has given rise to various conversational interfaces built on machine learning and natural language processing. As a result, over the last few years there has been exponential growth in models that detect patterns in human language and determine intent, especially when what is said doesn’t quite match what is meant – in other words, what we have termed ‘Contextual AI’.
The rise of contextual AI
One model that is changing the way that bots communicate is Word2Vec, an algorithm designed by Google that uses a neural net structure to learn word associations from large bodies of text, supporting sentiment analysis and named entity recognition through word similarity. The way this is done is by exploiting a linguistic concept sometimes called distributional similarity – simply put, the idea that words with similar meanings occur in similar contexts more frequently than dissimilar words do. As the name implies, Word2Vec represents each distinct word with a particular list of numbers called a vector. The model defines the dictionary using a vector space of these words in 300 dimensions, where the similarity in direction of two vectors holds information about the similarity in meaning of the two words. This is done by looping through the sample text it is trying to read, fitting a model based on a pre-defined number of neighboring words either side. To do that the neural net is used, saving the weights from the first layer of training.
Words do not need to be next to each other to be detected as similar after a large enough training time – if they are generally surrounded by similar words, it can be assumed linguistically that they have similar meaning. The model creates links between the surrounding and target words in a body of text using the skip-gram and CBOW (continuous bag of words) models of processing. These methods use shallow neural nets to distil semantic information about the language in a text by training on each word’s relationship to its surrounding words: CBOW iteratively tries to predict a target word from the words around it, while skip-gram tries to predict the surrounding words from a target word. This semantic information is then stored in the first weighting layer of the neural net, called the embedding layer. This can then be multiplied by a one-hot representation of a word to extract the word vector for any of the input words in the training text. This creates the vectors to be mapped into the space described above. Word2Vec’s dual-architecture method is very effective: CBOW allows for faster processing and better representations for more common words in a text body, while skip-gram works well with smaller datasets and produces strong representations for rarer words. The vectors can also be compared in as many dimensions as the comparison requires. To simplify mathematically, principal component analysis maps the vector space into a graphing system of the coder’s choice, where each dimension or axis is picked to represent the most useful data (the data with the largest variance): the “principal” components are kept, and the other dimensions of the vector are ignored for clarity. This can allow for interesting sentiment mapping of a dataset.
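The skip-gram training scheme described above starts from (target, context) pairs drawn from a sliding window. A minimal sketch of that pair generation in plain Python (illustrative only – real Word2Vec then feeds these pairs to a shallow neural net, which this sketch omits):

```python
def skipgram_pairs(tokens, window=2):
    """Generate (target, context) training pairs from a token list,
    pairing each word with its neighbours up to `window` positions away."""
    pairs = []
    for i, target in enumerate(tokens):
        # Look `window` words either side of the target...
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

sentence = "contextual ai needs context".split()
print(skipgram_pairs(sentence, window=1))
# [('contextual', 'ai'), ('ai', 'contextual'), ('ai', 'needs'),
#  ('needs', 'ai'), ('needs', 'context'), ('context', 'needs')]
```

Over a large corpus, words that keep appearing as context for the same targets end up with similar embedding-layer weights, which is exactly the co-occurrence effect the paragraph describes.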
It can provide a nuanced description of proximity, and by extension similarity of vocabulary, that more rudimentary methods of NLP such as entity/intent-based recognition may miss.
So what are the implications?
A weighted value of proximity, derived either through direct comparison or weight mapping, can show the semantic similarity between language choices in a body of text. This allows for a more nuanced form of language processing, which can be seen for a given question, for example: “Who is the Michael Jordan of golf?” As input to an entity/intent method of NLP, this would probably stop at detecting the entity of a basketball player and the subject of golf before coming undone. However, with the machine-learning-based model of NLP, the vector mapping could be used to calculate the Euclidean distance between the word vector ‘Michael Jordan’ and its closest entity pair (basketball), and use that to iteratively find a node at a similarly minimised distance from golf – say, ‘Tiger Woods’. The above example shows a new layer of complexity that can be added to the questioning of chatbot engines. To demonstrate a more enterprise example, we’ll use a question relevant to a medical aid company, posed to the integrated SMS bot via WhatsApp or text: “Find me the best medical center near me for prescriptions.” The use of the phrase ‘best’ would normally be problematic under less rigorous models. But with pertinent training, the vector mapping for the word vector ‘best’ in a medical context could return information such as shortest waiting time, lowest logged medication error rate and lowest price, which can then be fed through the NLP engine to return the ‘best’ centers based on each of the parameters described above.
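The “Michael Jordan of golf” reasoning above is just vector arithmetic on embeddings. The sketch below uses tiny hand-made 3-dimensional vectors purely for illustration – real Word2Vec embeddings have hundreds of dimensions and are learned from text, not written by hand:

```python
# Invented toy "embeddings": dimensions loosely encode
# (star-athlete-ness, basketball-ness, golf-ness).
vectors = {
    "michael_jordan": [0.9, 0.8, 0.1],
    "basketball":     [0.1, 0.9, 0.0],
    "golf":           [0.1, 0.0, 0.9],
    "tiger_woods":    [0.9, 0.0, 0.95],
    "accountant":     [0.0, 0.1, 0.1],
}

def add(u, v): return [a + b for a, b in zip(u, v)]
def sub(u, v): return [a - b for a, b in zip(u, v)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: sum(a * a for a in w) ** 0.5
    return dot / (norm(u) * norm(v))

# "Michael Jordan" - "basketball" + "golf" should land near "Tiger Woods".
query = add(sub(vectors["michael_jordan"], vectors["basketball"]), vectors["golf"])
best = max((w for w in vectors if w != "michael_jordan"),
           key=lambda w: cosine(query, vectors[w]))
print(best)  # tiger_woods
```

Swapping out the domain word ("basketball" for "golf") while keeping the "star athlete" component is what the analogy query does geometrically; libraries such as gensim expose the same operation over learned vectors.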
Hopefully this difference in processing avenues speaks for itself, but additionally this machine-learning model of NLP could also increase the accuracy of intents and entities for simpler questions. For example, in cases where a specific term – such as niche or specialist medical language – cannot be resolved by the architecture into an intent or entity, the word-vector form could be used to find the closest synonym among the keywords that the engine does accept, decreasing the likelihood that bots would not be able to understand a given question. This ability to enhance the complexity of chatbots and contextualize them has a number of implications. It means that a chatbot can do more than just hold conversations with customers. Contextual AI upskills chatbots to include multiple functions, from customer service through to mental health and wellness monitoring. It is becoming increasingly clear that platforms are becoming so smart that they can be used to automate processes that you didn’t even know could be automated – and ones that you don’t even know you need yet!
Charlie Masters, head of research, Vroomf
Quantum Sensing Solutions for Quantum Physics
Quantum physics is a theory that was established during the first half of the 20th century. It describes the nature and behavior of matter and energy at particle scales (e.g. photons). The theory predicted phenomena that were in contradiction with classical physics, such as entanglement and quantum superposition between particles. From the second half of the 20th century, experimental demonstrations have verified many of these predictions. While there is still ongoing research in quantum physics, quantum technology has emerged and found application in different domains such as cryptography, metrology and computing. ID Quantique’s range of quantum sensing solutions for quantum physics has a wide range of applications, from photon correlation and quantum communication to quantum computing and nano photonics.
Photon correlation is a technique applied across a variety of fields, such as spectroscopy, quantum communication and metrology (LiDAR, OTDR, range finding), where it provides particle sizing, quantum key distribution, and signal-to-noise ratio enhancement respectively. ID Quantique’s precision timing products offer a convenient way to measure and correlate the arrival time of photons with very high temporal resolution.
While conventional methods of securing communication have shown vulnerability, especially against quantum attacks, quantum key distribution has been demonstrated to be the only solution for providing secure data transfer. ID Quantique’s Quantum Key Distribution (QKD) systems are based on quantum optics, where the use of low-noise single-photon detectors combined with high-speed timing electronics is essential.
Quantum computing research has been conducted since the 1980s, and recent breakthroughs have seen significant improvements in computation speeds.
Although fault-tolerant quantum computers need further improvement, their application in cryptography, quantum research and simulation, solving linear equations and more is expected to significantly improve our scientific knowledge. Quantum computing requires quantum bits (qubits), but also qubit measurement. Measuring a quantum bit can be done in many ways, but most of the time it involves efficient, low-noise single-photon detection and photon correlation techniques. ID Quantique’s quantum sensing products fulfil these requirements.
Integrated Photonics and Nano Photonics
Dedicated, integrated optical circuits allow you to perform quantum communication and quantum computing operations in a very compact way. They are expected to enable a major breakthrough in the implementation of quantum technologies into everyday life. ID Quantique single-photon detectors can be implemented in a compact solution as OEM components.
With the recent advances in technology, it's hard to know where to put your attention. For example, 5G hasn't taken off as fast as people had hoped, but the possibility of combining it with artificial intelligence (AI) may lead to considerable innovations in the next few years.

A decade from now, the combination of AI and 5G networks will have revolutionized how business gets done in our everyday lives. Consumers will interact with companies through their personal AI assistants and 5G-enabled devices, physical and virtual, and demand information quickly and efficiently. They'll receive this requested information almost instantaneously thanks to the vast bandwidth provided by 5G. This high-speed data connection will open up new opportunities.

What is 5G?

5G is the fifth-generation mobile network: a set of standards for telecommunications and wireless communication protocols. It provides higher speed, ultra-low latency, wider coverage, and more capacity than previous network generations.

What is Artificial Intelligence?

Artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. It's a broad term for computer systems that mimic human thought processes, replicating cognitive processes such as learning, reasoning, and self-correction.

Potential 5G and AI Uses

While it's still early, there are already a few applications for combining 5G and AI technologies.

5G-enabled autonomous vehicles

Having connected cars on a single network would help eliminate the issue of dead zones. If your phone drops a call when you drive under an overpass or through certain tunnels, imagine how much worse it would be if you were driving an autonomous vehicle.
The combination of fast network speeds with onboard sensors could enable self-driving cars to communicate with each other in real time about traffic conditions, potholes, accidents, or other road hazards. Additionally, cities and transportation agencies could use that data to improve infrastructure and optimize traffic flow, for example by identifying areas where adding new lanes or rerouting traffic might make sense.

AI-driven tools for service operations

AI-driven technologies help network engineers automate and optimize network activities and business continuity planning, from reporting issues to reacting to events and incidents. For example, mobile networks and AI are merging in a new form of automation called AIOps. Telecommunication companies already use this approach to let software tools act quickly and respond immediately to operational events, incidents, or security issues, all without the need for human intervention.

Virtual reality (VR) and augmented reality (AR)

Both VR and AR rely on high-speed networks to deliver realistic images and sounds. With better connections, we'll see higher-resolution graphics and faster response times, which will lead to better experiences overall. Speed alone isn't enough, though: with a low-latency connection, a VR headset won't lag behind your head movements, because image updates take less time to reach your eyes. This is why some industry experts believe 5G's ultra-low latency may be critical to making VR and AR mainstream.

Analyzing logs of data with AI

The deployment of 5G will bring a massive increase in the amount of data generated by IoT (Internet of Things) devices, servers, apps, network controllers, and other equipment. Unfortunately, conventional methods of collecting data in logs offer little accessibility.
However, network management systems can now be automated to analyze data, get results, and extract insights to improve network performance regularly, thereby decreasing downtime.

Utilities and energy

We've already seen a lot of interest in 5G-connected home appliances, including refrigerators and washing machines. Imagine a smart refrigerator that lets you know when your milk or eggs are going bad, so you don't waste food. Add AI to that mix, and suddenly your fridge will be able to order replacement items. Likewise, that same AI could tell your washer/dryer combo to run only after electricity rates drop to off-peak levels, potentially saving money on utility bills.

How Does 5G Help AI?

Advances in network technology like 5G could bring greater speed and power efficiency to connected devices, which is crucial for developing self-learning systems. As more and more devices connect to autonomous networks, more data will be created. The speed at which we can transfer data from one device to another has been a significant factor in how machine learning (ML) algorithms have evolved, helping them learn faster. These advancements might even help with some of AI's biggest challenges, such as making it easier for machines to understand natural language and to identify objects independently, without being fed information by humans.

Here are three ways 5G could improve our future with AI:

Faster networking speeds

Networking speeds determine how quickly computers can communicate with each other. This affects everything from latency times to processing speeds and energy consumption. In an age where connected devices are becoming increasingly common, these factors matter more. Today, data transfer speeds over 4G networks average around 100 Mbps, while 5G promises up to 10 Gbps, roughly a hundredfold improvement.
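To put those bandwidth figures in perspective, here is a quick back-of-the-envelope calculation of idealized transfer times at the quoted 4G and 5G rates. The 5 GB file size is a hypothetical example, and real-world throughput would be lower due to protocol overhead and congestion.

```python
def transfer_seconds(size_bytes, link_bits_per_sec):
    """Idealized transfer time: payload bits divided by link rate."""
    return size_bytes * 8 / link_bits_per_sec

FILE_SIZE = 5 * 10**9   # a hypothetical 5 GB download (decimal gigabytes)
FOUR_G = 100 * 10**6    # ~100 Mbps, the 4G average cited above
FIVE_G = 10 * 10**9     # 10 Gbps, the 5G peak cited above

print(transfer_seconds(FILE_SIZE, FOUR_G))  # 400.0 seconds (over 6 minutes)
print(transfer_seconds(FILE_SIZE, FIVE_G))  # 4.0 seconds
```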
For AI, faster communication between devices means faster data transfer between processors, which translates into better responsiveness and higher levels of interactivity. Faster response times also allow quicker feedback loops during training, meaning ML models can adapt to real-time changes rather than wait for their next scheduled session, and machines can respond much more quickly if something goes wrong.

Reduced power consumption

Today's mobile devices typically use two kinds of wireless connectivity: cellular and Wi-Fi. Cellular connections are usually high-speed, but they consume more power because your phone must connect directly to a cell tower. Wi-Fi, on the other hand, consumes less power because you can connect wirelessly to any available router, but its connection speeds tend to be slower. 5G networks promise lower latency and longer battery life. One way this works is through beamforming, which allows 5G devices to transmit signals directly toward receivers rather than broadcasting them in all directions. This reduces power consumption, letting devices run more efficiently and get more out of a single charge.

Improved cybersecurity

As 5G networks become more widespread, cybersecurity will become a bigger concern for consumers and companies. A recent report from Cybersecurity Ventures predicts that cybercrime will cost the world $10.5 trillion annually by 2025, so it's no surprise that companies are starting to invest more in security. 5G networks will offer several benefits for cybersecurity, including faster data transfer speeds and improved encryption. For example, with 5G it will be easier and faster for companies to transfer data securely from one connected device to another and to share data between employees. Likewise, 5G networks include an additional layer of encryption that protects data from hackers.
AI and 5G Are Enhancing Each Other's Capabilities

Many envision a future where AI services work in conjunction with 5G networks, ensuring enhanced network speed doesn't get bogged down by traffic. As companies become more reliant on cloud-based apps, they won't have to worry about latency or service hiccups. AI can analyze data gathered from 5G networks, providing valuable insights for businesses looking to improve their offerings.

These two technologies are inextricably linked. Applying AI to both 5G networks and devices will increase efficiency and productivity across industries. Millions of devices rely on speedy connections to receive information in today's connected world. But 5G isn't just about speed; it's also about volume. The number of IoT devices worldwide is projected to reach 30.9 billion by 2025, and traditional network speeds won't be able to handle them. That's where artificial intelligence comes in. Thanks to AI, networks can learn how best to deliver data to individual users based on their unique preferences and needs. So while 5G provides a fast lane for massive amounts of data, artificial intelligence helps ensure every single piece of data gets where it needs to go as quickly as possible. It's an ideal pairing: by working together, these two technologies deliver better experiences for enterprises and consumers alike.

The Future Convergence of AI and 5G

As AI converges with other disruptive technologies, such as big data, cloud computing, blockchain, robotics, and IoT, converged systems have a distinct advantage over isolated systems. The convergence of these two disruptive technologies can help businesses optimize their operations by making better decisions faster than was ever possible before. These trends are already beginning to affect our daily lives through applications such as digital assistants, self-driving cars, and smart cities.
Combining artificial intelligence and 5G has many benefits in enterprise scenarios, including improved real-time analytics using ML techniques that enhance cybersecurity monitoring and protection, decision support for real-time actions and initiatives, predictive maintenance, and reduced network latency in business-critical applications.
Last year, ransomware attacks cost businesses $1.5 billion, and one widespread attack, WannaCry, is on track to generate over $4 billion in lifetime damages. There's no mistaking it: ransomware is a big business. And, like all businesses seeking longevity, it is evolving. Introducing the Priceline of ransomware: a new attack that allows victims to name their own price.

The basics of ransomware attacks

Ransomware is a form of malware that works by taking your computer hostage, usually by locking you out of your computer or encrypting all of your files. The attacks can reach your machine in a variety of ways: phishing scams, malicious links, fake sites, and so on. Regardless of how your device becomes infected, the outcome is the same: you are forced into paying a ransom to regain access. It's also interesting to note that the ransomware market is largely built on the "bad guys" (the ones enacting the attack) borrowing ransomware from its creator and remitting 20 percent of the set ransom back to them.

The cost of ransomware attacks

Ransoms can vary greatly in price. On average, users were forced to pay a little over $1,000 to regain control of their computers in 2016, a nearly 300 percent increase from the previous year. The cost to small and medium-sized companies can be much more significant: those that experienced ransomware attacks in 2016 lost around $100,000. Twenty-two percent of affected businesses with fewer than 1,000 employees had to halt operations immediately, and one in six companies reported that the attacks delayed business operations by 25 hours or more.

A new form of ransomware

Not all ransomware attacks follow the same pricing model. Scarab, a relatively new form of ransomware, one that's even equipped with "Game of Thrones" references, is pioneering a new approach. Instead of demanding a set price, a victim's device is directed to a website where users can begin negotiations.
Aaron Higbee, co-founder and CTO of PhishMe, tells ZDNet that Bitcoin, which has increased in value from $1,000 to $16,000 in just one year, may be partially responsible for this shift in tactics: "The negotiation process encouraged by the Scarab ransomware is particularly interesting. While entering into negotiations definitely makes it more likely that a ransom of some kind will be paid, it also allows them to fluctuate demands depending on the value of bitcoin at that time."

Ways to prevent ransomware attacks

The best way to protect yourself from ransomware attacks is through prevention. Here's a list of actionable steps you can take to ensure you're properly protected.

Educate yourself about new ransomware attacks

Ransomware attacks are constantly evolving, and it's imperative you keep up with the latest trends and tactics. In addition to following our blog, you can also follow other reputable sites like Security Intelligence, ZDNet, and the Center for Internet Security®.

Avoid phishy emails

Phishing is the biggest source of malware attacks like ransomware, so steer clear of any strange email. Here are ways you can identify phishing emails:
- Misspellings and grammatical errors throughout
- No contact details in the signature line
- The offer seems too good to be true
- The salutation is oddly worded or contains vague terms like "customer"
- When you hover over a link, it reveals a different URL than stated
- Something just feels off

If an email takes you to a webpage, thoroughly check the URL, footer, and design for any inconsistencies. Phishing sites have only gotten better, so for extra safety, go directly to your provider's main page instead of clicking through the email.

Verify a site's security before proceeding

Never proceed to a site your browser warns you may be dangerous. Also, stick to sites that use HTTPS.

Keep your software updated

Make sure your software is always up to date.
This means ALL of your software: even programs like spreadsheets and word processors need to run the latest versions. If you're alerted to a new patch or version of your software, install it quickly. This helps ensure hackers aren't exploiting vulnerabilities in your outdated version.

Use anti-virus software

Use a credible anti-virus program; a free service just isn't going to cut it. You'll need a program that runs automatic updates and conducts ongoing scans for vulnerabilities. To compare leading anti-virus software and their features, visit PC Magazine's collection of top anti-virus software for PC or Mac.

These tips provide a great starting place for staying safe, but ransomware isn't the only threat you might encounter. To learn more about malware and how you can protect yourself from phishing attacks, check out our complimentary ebook, Phishing for Dollars: How Identity Theft Is Leaving Businesses and Employees on the Hook.
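As a toy illustration of the phishing red flags listed above (vague salutations, too-good-to-be-true offers, link text that reveals a different URL than stated), here is a minimal heuristic scorer. The keyword lists, weights, and sample messages are invented for the example; real mail filters are far more sophisticated.

```python
import re

# Vague salutations like "customer" are one of the red flags listed above.
VAGUE_SALUTATIONS = ("dear customer", "dear user", "valued member")

def phishing_score(subject, body, links):
    """links: list of (display_text, actual_url) pairs; higher score = phishier."""
    score = 0
    text = (subject + " " + body).lower()
    if any(s in text for s in VAGUE_SALUTATIONS):
        score += 1
    # Crude "too good to be true / urgency" keyword check.
    if re.search(r"too good|free|winner|urgent", text):
        score += 1
    for shown, actual in links:
        # Link text that reveals a different URL than stated weighs heavily.
        if shown.startswith("http") and shown != actual:
            score += 2
    return score

print(phishing_score(
    "URGENT: claim your free prize",
    "Dear customer, click below.",
    [("http://yourbank.com", "http://evil.example")]))  # scores 4
```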
Explore a career in software development

What is it like to study software development? Meet the students: Andrew studied our full stack development course, which gave him valuable insight into multiple coding languages, including back-end software development.

Software development is the process of using a programming language to create software. It involves the planning, programming and deployment of software applications. You'll find software on all computers and electronics; it tells the computer how to work, and a developer's role is to program it to do so.

A career in software development means learning some of the world's most popular programming languages, such as Python, Java and C. Software developers often start by learning several languages before specialising in one. Learning to code is like learning another language, and software developers are often already skilled in computer science. A natural software developer is curious, thinks clearly, communicates well, reads quickly and closely, pays attention to detail, learns fast, and is passionate and capable of self-directed learning.

Software developers can work for themselves, for an agency, or in-house at top tech companies like Netflix or Google. A software developer can earn an average salary of $88,000.

Getting certified is the first step towards a career in software development. With online learning, you can study each programming language and put together your own portfolio to showcase the skills you're learning.
What is MPP?

Massively Parallel Processing, or MPP, refers to the use of a large number of processors to perform a set of coordinated computations in parallel, spreading the processing across clusters of servers to share the workload.

Why use MPP?

Processing large sets of data simultaneously greatly increases the speed at which you can get meaningful insights from your data. Rather than one process taking 100 minutes, have 100 processes taking 1 minute each: you'll have your answer in 1 minute!

Did you know? In 2012 computer engineers at the University of Southampton in the UK built an MPP supercomputer from a cluster of 64 Raspberry Pi computers and a rack made of Lego.

Latest Massively Parallel Processing Insights

Our tech blog is cut out for all data users interested in expanding their understanding of current data analytics topics. Find a new tech piece or practical solution every week. Maximum and consistent performance can be crucial in a business's life; explore how it fits into your cloud strategy. Advanced analytical tasks on big data can already be performed directly in the database using data science programming languages such as R, Python, Java and Lua. Read our use case to find out how data science and advanced analytics are being revolutionized.

Interested in learning more? Whether you're looking for more information about our fast, in-memory database or want to discover our latest insights, case studies, video content and blogs, we can help guide you into the future of data.
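The "100 processes taking 1 minute each" idea can be sketched in miniature with the Python standard library, splitting one aggregation across worker processes. This is a single-machine toy, not a distributed MPP engine, but the divide-compute-combine pattern is the same.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker process aggregates its own slice of the data.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Split the data into one contiguous chunk per worker, fan the chunks
    # out to a process pool, then combine the partial results.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))  # 499999500000
```

An MPP database does the same thing at a much larger scale: each node holds a shard of the data, computes its partial result locally, and a coordinator combines the answers.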
Overseeing a controlled environment is a prerequisite for a reliable data center. Temperature is one of the parameters data center administrators monitor most extensively, because fluctuations can cause rapid equipment failures. Ambient temperature monitoring verifies that the room temperature stays within its threshold over a given period. However, air temperature also varies from rack to rack. According to Gartner, a rack costs approximately $70,000, excluding business and operating costs. Can you imagine losing that amount? Relying solely on ambient temperature monitoring could therefore put your business continuity at risk; rack temperature monitoring should also be part of the overall monitoring scheme.

What Should Be The Rack Level Temperature?

The American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) provides temperature guidelines for rack temperature monitoring, specifying both inlet and outlet temperature standards. It recommends an inlet temperature within the range of 18°-27°C / 64°-80°F. The outlet temperature should be no more than 20°C (36°F) above the inlet temperature. Administrators must adhere to this standard to achieve the utmost reliability.

Rack Cooling Index (RCI)

The Rack Cooling Index (RCI) measures the effectiveness of the rack cooling method, maintained according to data center thermal guidelines and standards. Calculating it determines whether the rack temperature still complies with ASHRAE. The index is designed to help assess equipment room health when managing existing environments or designing new ones, and it can also serve as a design specification for new data centers.

Why Is Rack Temperature Monitoring Important?

Rack temperature monitoring prevents unwanted incidents. A data center consists of servers and equipment that exhaust hot air, and inadequate cooling and poor airflow can make the inlet temperature rise.
On the other hand, low temperature increases humidity. In either case, equipment failure will follow, affecting the whole business operation.

- Quick identification of heat problems: Rack temperature monitoring helps identify heat problems within the facility and lets administrators act promptly; for instance, air conditioning units can ramp up as soon as the temperature increases. Ambient temperature sensors will only notify you once cooling is already insufficient, and by that point the equipment has been under thermal stress for some time. As an administrator, risking the business by wasting time is not an option.
- Preventing hotspots: Monitoring also identifies potential hotspots, which occur when equipment with higher air intake causes the temperature at the server rack to rise. Hotspots are a combination of airflow deficiencies and rack layout. Temperature monitoring ensures proper airflow within the server rack, which can be up to 47U (2 m) in height.
- Maximizing equipment lifespan: Heat rises, so equipment at the top of the rack is most at risk. Heat shortens equipment lifespan, primarily affecting batteries and CPUs. For every degree above the required temperature, the life of a lead-acid battery is cut in half, and high temperatures can also cause the CPU to melt down.
- Preventing mold growth: The combination of high humidity and low temperature favors mold growth. Molds can grow on racks and on servers' motherboards; they are small, fast-growing organisms, which makes them one more source of stress for data center administrators. Rack temperature monitoring maintains the optimum temperature and prevents molds from taking hold on racks and equipment.

How To Manage Server Racks?

An upsurge in temperature can happen at any time and anywhere within the facility, and some protocols apply when this transpires.
But administrators of a mission-critical facility should not wait for this to happen before taking action. Proper management of server racks to avoid complications is a must before running a data center:

- Sufficient ventilation: Secure a ventilated server rack by installing switches at the top of the server racks.
- Good layout: For high-density racks, use a hot-aisle/cold-aisle layout to ensure containment. This layout creates clearly separated paths for hot and cold air, allowing the equipment to pull in the coolest air.
- Choosing the right server racks: With the variety of racks available on the market today, choosing the right one can be challenging. The right server rack prevents heat build-up and ensures an efficient data center. Factors to consider include load capacity and flexibility for design changes.
- Proper arrangement of servers: Racks need to be filled with servers without open slots. If slots are left open, air flows into the opening, disrupting the air that needs to pass through the equipment. Using rack filler panels to block the open slots is a good solution.

Rack Temperature Sensor: A Solution

Administrators need to stay alert under any circumstances within the facility and be prepared for issues that could affect equipment reliability. However, a temperature rise is imperceptible, which makes this a difficult task: a slight increase is impossible to notice until it goes beyond the point humans can feel. A temperature sensor is the solution, as it makes rack temperature monitoring easier. With proper placement, it can send a notification when the rack temperature surpasses its threshold. Real-time temperature is measured, allowing appropriate action at an early stage. As the sensor collects data on the environment, the data is converted into a record.
Administrators can view this record at any time to track patterns of temperature change. For data centers, it is about more than knowing what is currently happening: administrators should also explore future possibilities that might put the equipment at risk. While servers at higher temperatures operate more efficiently, a single mistake can turn into a catastrophe. Rack temperature sensors will tell you whether the environment is still favorable to the equipment, ensuring a catastrophe-free data center.

Rack Temperature Sensors Placement

Although servers have built-in cooling and ventilation systems, their reliability still depends on the environment's condition. Moreover, each server has a different heat tolerance, so the cooling demand of every single piece of equipment or rack can hardly be known. Fortunately, proper placement of sensors is a solution. In rack temperature monitoring, sensors placed in the most appropriate locations deliver the most accurate results. When installing, place them close to the points you want to measure, but avoid the direct airflow entering and leaving the equipment, since those areas fluctuate most often.

ASHRAE describes three models for placing temperature sensors: managing the space, setting up the space, and troubleshooting the space. For a high-density data center, it recommends placing a minimum of six sensors per rack, located at the top, bottom, middle, and back; the sensors at the back monitor inlet and outlet air temperature. Gartner suggests temperature sensors can be placed at three points in the rack: the bottom front, the top front, and the top back, which has the highest temperature.

Shifting To Wireless

As innovation escalates, wireless technology makes the placement of rack temperature sensors easier.
Wireless temperature sensors have countless advantages in cost, accuracy, and accessibility. With these capabilities, it is no wonder most companies in the industry have switched to wireless. Wireless rack temperature monitoring saves construction costs during installation as well as labor costs: recording is done in a cost-effective way, since no worker needs to record readings manually. Additionally, a data center needs to run at full efficiency without interruption, which leaves no room for problems caused by human error. With a wireless temperature sensor, there is no need to worry about the accuracy and validity of the data. Accessibility is also one of the major reasons the industry invests in wireless: records can be accessed at any time regardless of proximity, and can be sent through email, SMS, or any other application integrated with the system.

AKCP Wireless Temperature Sensors

AKCP offers wireless temperature sensors for more comprehensive rack temperature monitoring. With the amount of equipment within a data center, monitoring is certainly critical and requires precise data for better decision-making. For AKCP, accuracy is not a problem: it can provide the most accurate data for a specified period, so you know what happened and when it happened. Taking action a second too late will have a big impact on reliability, and you surely don't want that to happen. Message our AKCP sales team now!
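As a rough sketch of the threshold-based alerting described above, the snippet below checks per-sensor readings against the ASHRAE inlet range (18°-27°C) quoted earlier. The sensor names and readings are invented examples, not output from any AKCP product.

```python
# ASHRAE recommended inlet temperature range cited earlier in the article.
INLET_MIN_C, INLET_MAX_C = 18.0, 27.0

def check_rack(readings):
    """readings: dict mapping sensor name -> inlet temperature in Celsius."""
    alerts = []
    for sensor, temp in sorted(readings.items()):
        if temp > INLET_MAX_C:
            alerts.append(f"{sensor}: {temp:.1f} C above recommended max")
        elif temp < INLET_MIN_C:
            alerts.append(f"{sensor}: {temp:.1f} C below recommended min")
    return alerts

# Three hypothetical sensors on one rack (top/middle/bottom placement).
print(check_rack({"rack1-top": 28.4, "rack1-mid": 24.0, "rack1-bottom": 17.5}))
```

A real deployment would poll the sensors on a schedule and route each alert to email or SMS, as described in the article, rather than just printing it.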
Let me tell you what net neutrality is. Net neutrality is simply internet freedom: a free, fast and open internet for all. The internet is built around the idea of openness. It allows people to connect and exchange information freely, as long as the information or service is not illegal. Much of this is because of the idea of net neutrality. If you like the current state of the internet, it is because of net neutrality.

India is currently facing a net neutrality issue. The Telecom Regulatory Authority of India (TRAI) is planning to allow telecom operators like Vodafone and Airtel to block applications and websites in order to extort more money from consumers and businesses, which is nothing less than an extreme violation of net neutrality.

Without net neutrality, ISPs will have the power to shape internet traffic so that they can derive extra benefit from it. For example, several ISPs believe they should be allowed to charge companies for services like YouTube and Netflix because these services consume more bandwidth than a normal website. Basically, these ISPs want a share of the money that YouTube or Netflix make.

Without net neutrality, we also lose free access to the internet: there would be separate package plans for each service we want to use. For example, you would have to pay separately to use Facebook, and separately again for WhatsApp.

What to Do Now?

TRAI has released a consultation paper with 20 questions and wants people to respond by email by 24th April, in order to hear people's opinion on net neutrality. If we all come together and support net neutrality, we can keep the internet open. To support net neutrality, just follow the steps below:
- Visit the SavetheInternet site and click on the Respond to TRAI button.
- You will get a popup with responses to the consultation paper's 20 questions.
- Copy the text with Right-click -> Copy (or Ctrl+C) and click Done.
- Next, you will get the option to send the response by email via Gmail, Yahoo and others.
- Log in with one of those options and paste the copied text into the email body.
- Finally, click send.

That's all you have to do. So I request all our readers to support net neutrality, and to ask your friends and family members to do the same. Many people are not aware of net neutrality, so you can guide them through this. Do it now, before it's too late. Come on, join this initiative and save the internet.

A lack of net neutrality would also spell doom for innovation on the web. The YouTube channel All India Bakchod (AIB) has released a video asking people to Save the Internet. Do watch the video to understand the issue of net neutrality.
PKCS#15 (Public Key Cryptography Standards #15) is a cryptographic token information format standard. It defines a standard allowing users of cryptographic tokens or smart cards to identify themselves to applications, independent of the application's Cryptoki implementation (PKCS#11) or other API. RSA has relinquished the IC-card-related parts of this standard to ISO/IEC 7816-15.

PKCS#15 and the more recent ISO/IEC 7816-15 standard describe a method of discovering the files, objects and features of a file-based smart card, and the specific content encoding of the files representing PKCS#15 structures (ODF, PuKDF, PrKDF, CDF, AODF, SKDF, etc.). ISO/IEC 7816-4 describes a framework for implementing and using file-based cards, but it does not describe how to discover which files, objects and features are contained on a personalized card.

Cryptographic tokens, such as Integrated Circuit Cards (IC cards or "smart cards"), can provide a secure storage and computation environment for a wide range of user credentials such as keys, certificates and passwords. Because of this, it is widely recognized that they offer great potential for secure identification of users of information systems and electronic commerce applications.

The use of PKCS#15 tokens or smart cards for authentication and authorization has been hampered by a lack of interoperability at several levels. First, the industry has lacked a standard, common format for storing digital credentials (keys, certificates, etc.) on them. This has made it difficult to create applications that can work with credentials from a variety of technology providers. Attempts to solve this problem in the application domain invariably increase costs for both development and maintenance. They also create a significant problem for end users, since credentials become tied to a particular application running against a particular application programming interface on a particular hardware configuration.
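PKCS#15 directory files such as the ODF are DER-encoded ASN.1 structures. As a generic illustration of the tag-length-value encoding involved (not a parser for any real card image), here is a minimal DER TLV reader; the sample bytes are an invented example.

```python
def parse_tlv(data):
    """Parse a flat sequence of DER TLV records (short and long length forms)."""
    i, records = 0, []
    while i < len(data):
        tag = data[i]; i += 1
        length = data[i]; i += 1
        if length & 0x80:  # long form: low 7 bits give the number of length bytes
            n = length & 0x7F
            length = int.from_bytes(data[i:i + n], "big")
            i += n
        records.append((tag, data[i:i + length]))
        i += length
    return records

# A DER SEQUENCE (tag 0x30) wrapping one OCTET STRING (tag 0x04).
blob = bytes([0x30, 0x05, 0x04, 0x03, 0x01, 0x02, 0x03])
outer = parse_tlv(blob)
print(hex(outer[0][0]))              # 0x30 -- the SEQUENCE tag
print(parse_tlv(outer[0][1]))        # [(4, b'\x01\x02\x03')]
```

Real PKCS#15 parsing additionally involves multi-byte tags and the specific ASN.1 module defined by the standard; toolkits such as OpenSC handle that layer in practice.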
Protect Your Data With Super Easy File Security Tricks!

By: Andy Green

Data security is an all-encompassing term. It covers processes and technologies for protecting files, databases, applications, user accounts, servers, network logins, and the network itself. But if you drill down a little in your thinking, it’s easy to see that data security is ultimately about protecting a file somewhere on your system – whether on desktops or servers. While data security is a good umbrella term, we need to get into more detail to understand file security.

File Security and Permissions

As Microsoft reminds us, files (and folders or directories) are securable objects. They have access or permission rights for controlling who can read, write, delete, or execute at a very granular level through Access Control Lists (ACLs). And in the Linux world, we have a similar, although far less granular, system of permissioning.

Why have the concept of permissions in the first place? Think of an enterprise computing environment as a semi-public place – you’re sharing a data space with not just anyone, but other employees. So a file is not the equivalent of a box with a lock that keeps out anyone who doesn’t have the combination or key. (Well, there is encryption, but we’ll cover that below.) Instead, the assumption in a Windows, Linux, or other operating system environment is that you want to share resources. The operating system’s file system permissions are there to provide a broad way to limit what can be done. For example, I want workers in another group to read our presentations, but I certainly don’t want them to edit. In that case, we’d specify – as shown below – read and write permission for users who belong to the group, and just read permission for everyone else.

In the Beginning, There Was Unix-Linux Permissions

Let’s look at a very simple permissioning system.
It’s the classic Unix-Linux model, which provides basic read-write-execute permissions and a very simple method of deciding who those permissions apply to. It’s called the user-group-other model. Effectively, it divides the user community into three classes: the owner of the file (user), all those users belonging to groups that the owner is a member of (group), and finally everyone else (other). You can see this permission structure when you run an ls -l command.

How do you specify a permission to add or subtract from user, group, or other? There’s the Linux chmod command. Suppose I decided that I’d like other users in groups I belong to to have access to the my-stuff-2.doc file, which I had been keeping private. I could do this:

- chmod g+r my-stuff-2.doc

Or now I want to take back and make private the presentation-secret.doc file, which I had allowed other groups to view and update:

- chmod g-rw presentation-secret.doc

The Unix-Linux permission model is simple and well suited for server security, where there are system-level applications accessed by a few privileged users. It is not meant for a general user environment. For that you’ll need ACLs.

What Are Access Control Lists?

Windows has a far more complex permissioning system than Linux. It allows users to define a permission for any Active Directory user or group, which is represented internally by a unique number known as a SID (security identifier). Windows ACLs consist of a SID and another number representing the associated permission – read, write, execute, and more. This is called an access mask. The SID and the mask together are referred to as an access control entry, or ACE. We’ve all seen the user-friendly representation of the ACE when we view a file or folder’s properties.

Obviously, ACLs can make permissioning quite complex. In theory, you can have ACEs for each user that needs to access a file or folder. No, you shouldn’t do that!
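As an aside, the g+r / g-rw operations above are just bit manipulations on the file mode. Here is a small Python sketch of the same operations using the standard stat module (an illustration, not part of the original post):

```python
import os
import stat
import tempfile

def set_group_read(mode, enable):
    """Toggle the group-read bit: the moral equivalent of chmod g+r / g-r."""
    return mode | stat.S_IRGRP if enable else mode & ~stat.S_IRGRP

# Demonstrate on a throwaway file: start private (rw-------), then grant
# and revoke group read, checking the bits with os.stat each time.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)                   # 0o600, private
assert not os.stat(path).st_mode & stat.S_IRGRP

os.chmod(path, set_group_read(os.stat(path).st_mode, True))   # chmod g+r
assert os.stat(path).st_mode & stat.S_IRGRP

os.chmod(path, set_group_read(os.stat(path).st_mode, False))  # chmod g-r
assert not os.stat(path).st_mode & stat.S_IRGRP
os.remove(path)
```

The same masks (S_IRUSR, S_IWGRP, S_IXOTH, and so on) map one-to-one onto the rwx triplets shown by ls -l.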
Instead, there’s the preferred method of assigning users to a group and then combining all the groups that need access to a folder into a larger group. This umbrella group is then used in the ACL. I’ve just described something called AGLP, for Account, Global, Local, Permissioning, which is the Windows-approved method for efficient file and folder permissioning. So if an employee moves to another project (or leaves the company) and therefore no longer needs access, you simply remove that user from the Active Directory group without having to adjust the ACE in the specific folder or file. Easy peasy in terms of file security management. And a sensible way to reduce security risks in an enterprise computing environment.

And Along Came File Encryption

If you’re paranoid, there is encryption, which is certainly a valid, if extreme, technique for solving the issues of file security. It may be safe, but it is a very impractical solution to securing file data. Windows supports encryption, and you can turn it on selectively for folders. Technically, Windows uses both asymmetric and symmetric encryption. The asymmetric part decrypts the symmetric key that does the actual block encryption/decryption of the file. The user has access to the private part of the asymmetric key pair that gets the whole process started. And only the owner of the folder can see the unencrypted files.

Obviously, with one user in control of the encryption, this does not lend itself to allowing multiple users to share access to files and folders. Add on the potential for losing access to the asymmetric encryption key, which is kept in a certificate, and you can have a self-made ransomware attack on your hands. And yes, you should back up encryption certificates!

As we’ve been saying, the file system is where employees keep and share the content (spreadsheets, documents, presentations) that they’re working on now.
It’s their virtual desk, and adding a layer of encryption is like moving things around and making that desk even sloppier – no one likes that! – as well as being administratively difficult to manage.

Pseudonymization: Selective File Encryption

And this brings us to pseudonymization. The idea is to replace personal identifiers with a random code. It’s the same idea behind writers using pseudonyms to hide their identities. The GDPR says you can do this on a larger scale as a way to lessen some of the GDPR requirements. Generally, there would have to be an intake system that processes the raw data identifiers and converts them to these special codes. And there would have to be a master table that maps the codes back into the real identifiers for those processes that need the original information.

Using this approach, employees could then work with pseudonymized files in which the identities of the data subjects would be hidden. The rest of the file, of course, would be readable. Partial encryption is perhaps one way to think about this technique. Like encryption, pseudonymization is considered a security protection measure (see the GDPR’s article 32), and it’s also explicitly mentioned as a “data protection by design and by default,” or PbD, technique (see article 25). It’s also considered a personal data minimization technique – very important to the GDPR. Will pseudonymization spread beyond the EU’s GDPR and be adopted by the US in its own coming data privacy and security law? We will see!

Best File Security Practices

Enterprise computing environments are designed to help employees get their work done. Sure, there are built-from-the-ground-up secure operating systems, but they’re meant for top-secret government projects (or whatever Apple is working on next). For the rest of us, we have to learn to work with existing commercial operating systems, and find ways to minimize the risks of data security lapses.
Here are three easy-to-implement tips for boosting your file system security.

- Eliminate Everyone – The default Everyone group in Windows gives global access to a folder or file. You would think that companies would make sure to remove this group from a folder’s ACL. But in our most recent annual Data Risk Report, we discovered that 58% of the companies we sampled had over 100,000 folders open to every employee! Sure, you’ll need to grant Everyone if you’re sharing the folder over the network, but make sure to remove it from the ACL and then do the RBAC analysis that follows.
- Roll Your Own Role-Based Access Controls (RBAC) – Everyone has a job or role in an organization, and each role carries an associated set of access permissions to resources. Naturally, you assign similar roles to the same group, apply the appropriate permissions to that group, and then follow the AGLP method from above. When implemented correctly, this should be easy to maintain while reducing security risks. Yes, it does require more than a little administrative overhead.
- Minimal Least Privilege Permission – This is related to RBAC, but it involves focusing particularly on “appropriate” permissions. With the least privilege model, you pare down access to the minimum that is needed for the role. Marketing may need read access to a folder controlled by the finance department, but they shouldn’t be allowed to update a file or perhaps run some special financial software. Administrators need to be ruthlessly stingy when granting permissions with this approach.

I lied. These tips are super-easy to understand, but not super-easy to implement! You’ll need some help … We just happen to have a solution that will make these great tips easier to put into practice. This blog was brought to you by our partner, Varonis. You can attend their Camp presentation, The Modern State of Insecurity, on Day 2 at 10:45am in Grandroom C.
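Circling back to pseudonymization for a moment: the intake-plus-master-table idea from earlier can be sketched in a few lines of Python. The class below is invented for illustration; a production system would persist the master table and protect it at least as tightly as the original identifiers.

```python
import secrets

class Pseudonymizer:
    """Swap identifiers for random codes; keep a master table for reversal."""

    def __init__(self):
        self._forward = {}   # identifier -> code
        self._reverse = {}   # code -> identifier (the "master table")

    def pseudonymize(self, identifier):
        if identifier not in self._forward:
            code = "PSN-" + secrets.token_hex(8)
            self._forward[identifier] = code
            self._reverse[code] = identifier
        return self._forward[identifier]

    def reidentify(self, code):
        """Only processes entitled to originals should reach the master table."""
        return self._reverse[code]

p = Pseudonymizer()
code = p.pseudonymize("alice@example.com")
assert p.reidentify(code) == "alice@example.com"
assert p.pseudonymize("alice@example.com") == code   # mapping is stable
```

Employees then work with files containing only the PSN-codes, while the master table stays behind stricter access controls – the “partial encryption” framing described above.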
With data centers playing a pivotal role in the world’s transition to digital, IT infrastructure operators need to adapt every year to the steady growth in digital demand and, as a result, the steady growth in energy consumption. To keep pace with this digital growth, major digital operators have been undertaking ambitious actions for several years aimed at achieving carbon neutrality, and Engie is no different. With a focus on green data centers and on contributing to the performance of digital infrastructures as part of a carbon-neutral trajectory, Engie has made several commitments to sustainability, including having already installed 34.4GW of renewable capacity, with the goal of reaching 80GW by 2030. The company has also implemented 23GW worth of low-carbon distributed energy infrastructure (including DHCs), with an aim to push this to 31GW by 2025, as well as 4GW of green H2 capacity by 2030. But why are these sustainability commitments so important? The most common way for data centers to get the electricity they need to function is simply to be connected to their local or national power grid. Depending on the country, these grids will provide them with electricity created by an energy mix they don’t control, however green they want to be and no matter how eager they are to reduce their carbon footprints. This simple truth – that data centers mainly rely on the grid for energy – is also the clear reason why all of them have to install backup energy too. As a matter of fact, there is no grid reliable enough to match the availability requirement of the highly critical activity of any data center – simply put, no grid is on 100 percent of the time. And each time there is a shortage of electricity, it takes no more than 20ms for servers to stop functioning, crashing the IT production that is the data center’s mission.
This technical constraint explains why electrical backup has to include inverters and batteries – to provide power without interruption and give the main backup (generally diesel backup power generators) the time it needs to start and function properly. So even if grid outages are indeed quite rare – particularly in the modern, developed countries where data centers are more likely to be found – they are still the only reason such heavy investments are made in energy assets. Data centers simply cannot suffer power interruptions – not even the possibility of them. Therefore, for years, data centers have invested in their on-site generation with UPSs. Those energy assets have been, and still are, a real burden from an economic standpoint: firstly because of the large amount of CapEx they consume and, secondly, because of the OpEx needed for their maintenance. These assets are also part of a larger challenge that is a boiling hot topic now: sustainability. How can data centers alleviate their carbon footprints and head for a greener way to function with these diesel engines as the only energy backup solution? Over the last two decades, data centers have dealt with energy this way – a combination of grid electricity for affordability and emergency diesel generators for reliability – and there were no alternatives. But those times may be over, because what data centers are seeking is a reliable dual feed to create 100 percent availability for IT production, and it can now be done in a totally different way. Data centers can now access an energy prosumer model, where energy assets are positioned within a microgrid operated by an energy provider knowledgeable in energy market trading, enabling an effective integration between the microgrid and the larger grid. In this disruptive new model, energy assets – both those used to get and distribute the main source of energy and those needed to secure backup energy – are positioned within a microgrid.
This microgrid is conceived and built by an energy provider. Depending on the possibilities of the location, and accordingly the specific requirements of the data center, the microgrid could include sustainable energy sources such as photovoltaic panels, wind turbines, or even aggregate systems based on hydrogen or biomethane. Switching to this prosumer model offers more than one benefit for data centers. For data centers – inside specific power purchase agreements – it is also an elegant way to reach carbon neutrality by supporting the global effort toward sustainability. The dual feed inside the microgrid also guarantees high availability – two energy sources provide both primary and emergency power to the data center. And even if the microgrid is first created to answer only the data center’s needs, it also becomes an active component of the larger grid, contributing to resiliency and the transition toward renewable energy. With this approach, data centers stay focused on their critical mission: IT production. The energy specialist is the one to take care of the microgrid – from conception, through the build, and during operation and maintenance – in exchange for a negotiated fixed price. Energy production is carried out in accordance with the needs of the data center and based on the primary source of energy available: gas, solar, or wind, for example. The prosumer model is also an easy way to answer the increasing demand for transparency and control regarding the energy consumption and sustainability measures of data centers. This disruptive model offers data centers access to renewable energy, both for their primary and emergency needs. For this, data centers are required to invest some CapEx to create the dedicated dual-feed microgrid and then to pay for the energy they consume. By doing so, they secure both the reliability and the availability to deliver IT production with green energy.
All in all, these investments not only bring data centers further along their transition to carbon neutrality, but also make them key players in helping their local communities do the same. But that’s the beginning of a whole other story.
Blockchain has been one of the hottest new technologies of the past couple of years. It’s important to remember that working with blockchain has important repercussions when it comes to information governance. With blockchain you don’t have to know who is on the other side of the transaction. Blockchain is a technology and infrastructure that allows for the execution or completion of transactions without precisely knowing who is on the other side of that transaction. With blockchain, you don’t need identity verification. In the popular media, blockchain – and the relationship it has to virtual currencies such as Bitcoin – has a financial focus. But blockchain can also apply to any transaction or passage of data or contracts in any industry. Blockchain provides transparent, secure and immutable transactions. There are three key characteristics of blockchain: the technology is transparent, secure, and immutable. Blockchain is transparent in the sense that you can look at the trail of the movement of data. You can see the exact line of steps where data has been passed along. That’s the “chain” in blockchain. That’s important from an information governance standpoint. Though you don’t need to know or trust the recipient of your data, you do know that your data can’t be changed. No one can destroy your blockchain without your active participation and consent. You would be aware of the change. In effect, the technology engenders trust because of the viability of the data. Blockchain is secure in that there’s an encryption and hashing mechanism that requires a key in order to get access to the data (or the currency, or the contract). You can’t steal the data (or currency) because it’s distributed all over the world. And because of that distributed nature, in combination with the encryption and hashing, nothing can be altered without the key. Blockchain is immutable because past records can’t be changed. Think of a financial ledger.
You can continue to append the ledger with information, but you can never change the original or base data. Blockchain facilitates the transfer of information or documents. The immutability of that transfer presents a chain of custody. By allowing the information to be distributed, the vast majority of the users would have to agree on any change. Blockchain provides an opportunity for efficient information governance. That’s why blockchain provides opportunities for efficient information governance. One application might be GDPR with its provision that allows someone to request your data. One possible application: it may be easier, after giving permission to distribute your data, to get that data back and revoke an organization’s right to use the data. The core of information governance concerns rules about the storage and transfer of information. Blockchain is a totally different way to transfer, consume and store data. Why the market for data intermediaries will become more challenging. As more sectors employ blockchain infrastructures, the market for data intermediaries will become very challenging. Many of these companies may cease to have value. Take the case of Electronic Medical Records. The chain of information from a patient to provider to payer needs to be secure. But with blockchain you know as a patient that the minimum necessary information is there, and that the information can’t be changed. Normally, the more people who touch information about you, the more risk there is to you. But with blockchain, you as a patient know that the information can’t be changed, and that only the right people have access to the information. Will blockchain redefine the financial services industry? In the popular press, blockchain has been more discussed within financial services companies, since it represents a new application of cryptography and information technology to the age-old problem of financial record keeping. 
That’s another reason why some people in our industry believe that blockchain technologies may lead to far-reaching changes in corporate governance. Many major players in the financial industry have begun to invest in this new technology, and stock exchanges have proposed using blockchains as a method for trading corporate equities and tracking their ownership. Some even say that the existence of a “bank” as a financial “intermediary” may be threatened by blockchain. If that were the case, some relationships within financial services would change. An Oxford University Press essay published this year talks about those changes: “The lower cost, greater liquidity, more accurate record-keeping, and transparency of ownership offered by blockchains may significantly upend the balance of power,” the essay suggests. Blockchain may provide more flexible sourcing in the energy sector. The World Economic Forum published another report this year that suggests an application for blockchain in the energy sector. “A blockchain-based platform enables us to simply and securely balance the grid from both ends at the same time,” the report’s authors, Jon Creyts and Ana Trbovich, write. “[This] dramatically increase[s] asset utilization in a capital-intensive network. … Expect to see more straightforward applications of energy blockchains in the near term. One of the first will allow you to select the source of your power.” It’s hard to predict the overall effect that blockchain will have on information governance, especially given the open source nature of the technology. It’s possible, for instance, that players and solutions will be highly fragmented, as is often the case with new technologies. That Wild West may be balanced by big players that begin to build blockchain solutions. The core attributes of blockchain align well with information governance.
The core attributes of blockchain – transparency, security and immutability – align well with best practices in information governance. Good information governance also relies on these same three things. If you don’t know what you have, if your data is in an unsecured location, or if you have no control over your information, you are courting disaster. If you clean up your data environment and you control how to get to information and where it’s stored, you’re on the right path. Practicing the three tenets of blockchain – transparency, security and immutability – is valuable regardless of whether your organization is going to be using the technology. We’re happy to advise those organizations seeking to build an information governance construct around their blockchain efforts.
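For readers who want a concrete feel for the immutability property discussed above, here is a toy hash chain in Python. It is a deliberately simplified sketch – there is no distribution, consensus, or encryption here – but it shows why changing a past ledger record is detectable.

```python
import hashlib
import json

def block_hash(block):
    """Canonical SHA-256 digest of a block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, record):
    """Append a record that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev": prev})

def verify(chain):
    """Check every link: each block must reference its predecessor's hash."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append(ledger, {"from": "A", "to": "B", "amount": 10})
append(ledger, {"from": "B", "to": "C", "amount": 4})
assert verify(ledger)

ledger[0]["record"]["amount"] = 1_000_000   # tamper with a past record
assert not verify(ledger)                   # every later link now fails
```

Because each block commits to the hash of its predecessor, altering any past record invalidates every later link – which is exactly the property that makes a blockchain a credible chain of custody.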
We use social media to keep up with friends, family, and colleagues; but these connections also put us at risk. Social site creators are becoming more aware of the vulnerabilities associated with storing private user information, but we as users must also do our part to protect ourselves. This starts with being cognizant of safe security practices. Here are four simple methods to keep your accounts secure and private.

Strengthen Your Passwords

It does not take much time for a determined hacker to break into an account ‘secured’ by a simple password. With personal password security, the goal is to limit access to a sole user – yourself. Strengthen your authentication with these best practices (while still remembering your password):
- Use a combination of words, numbers, symbols, and mixed-case letters
- Use a passphrase (a random mix of words) for complexity
- Utilize a trusted, encrypted password manager to store passcodes

Use Two-Factor Authentication

Two-factor authentication is a security function where users receive an email or text message with a code to verify their identity when accessing accounts. Facebook, LinkedIn, Instagram, and Twitter support this tool, allowing the user to add an extra layer of security by ensuring only the authorized individual can log in.

Add Friends With Caution

Be careful when allowing new friends access to your profile. A persistent hacker will create fake profiles, sending friend requests to a target’s friend list in an attempt to seem more credible. This lowers your doubts about the identity of the requester. Once the hacker has access to your profile, any personal information you have posted can be used for future exploitation, including identity theft.

Conduct Security Checkups

Many social media platforms send an email if a login attempt is made from a new or unrecognized location. Facebook and LinkedIn settings allow you to view all active login sessions and previous login locations.
Reviewing social media security settings periodically and becoming familiar with new security features will help protect you from intrusions. Most social media sites are free to use, but that does not mean our private data is safe from intruders. We share parts of ourselves in exchange for access to the online world. Be intentional about what you share, who has access, and how that information is protected.
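The passphrase advice above is easy to automate. Here is a minimal sketch using Python’s secrets module (the word list is a tiny stand-in – a real generator should draw from a large list, such as the EFF diceware lists):

```python
import secrets

# Tiny stand-in word list -- a real generator should use a large list.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "maple",
         "velvet", "anchor", "prism", "cobalt", "harbor", "quartz"]

def make_passphrase(n_words=4, separator="-"):
    """Join randomly chosen words using a cryptographically secure RNG."""
    return separator.join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())
```

Using secrets rather than the random module matters here: secrets draws from the operating system’s cryptographically secure randomness source, which is what password material requires.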
November 19, 2019 | Written by: Niina Haiminen | Categorized: Big Data Analytics | Publications

When you buy food from the supermarket or order at a restaurant, an apple can clearly be identified as an apple. But how can you be sure of the contents of a spice mix, ground beef, or fish cakes? Unintentional contamination and intentional fraud can occur in the food supply chain. Food authentication is emerging as an important field, and at IBM Research we are applying a multidisciplinary approach in collaboration with industry and academia to address this global challenge. As food supply chains become increasingly global and complex, more sophisticated approaches to ensuring safe food are needed. The Consortium for Sequencing the Food Supply Chain aims to explore the application of genomics and big data to food safety in order to generate new insights and understanding of the total supply chain, ultimately moving towards prediction and prevention of food safety incidents. Consortium founders IBM Research and Mars Incorporated, together with member Bio-Rad Laboratories and consulting professor Dr. Bart Weimer of the UC Davis School of Veterinary Medicine, have been testing a new way to authenticate the composition of raw materials. Through the application of metagenomics, analytics and cloud technology we are generating new insights and understanding of food supply chains. As described in our co-authored research paper Food authentication from shotgun sequencing reads with an application on high protein powders, published today in the Nature Partner Journal Science of Food, the consortium has created a new pipeline for food component identification that can simultaneously detect multiple expected and unexpected components. In a world where global food supply chains are increasingly complex, this research showed that such a pipeline has potential for reliable ingredient authentication.
Food authentication is becoming increasingly important, as contamination and fraud can occur at any point within a supply chain. IBM researchers are collaborating with industry and academia to use metagenomics, analytics and cloud to build new ways to authenticate the composition of raw materials. How does the pipeline work? This new approach involves evaluating DNA and RNA sequencing data from food against a database of thousands of plant and animal genomes. In this proof-of-concept work, a food authentication pipeline was initially developed using simulated and experimental datasets, and then applied to 31 high protein powder (HPP) samples. Analyzing the HPP samples was exciting as this was the first time that food ingredient sequencing at this scale had been done, and we did not know what to expect from applying the pipeline. This research indicated traces of other ingredients introduced — possibly during ingredient transport or processing — and the effectiveness of sequencing in verifying authenticity. More work and data is needed and ongoing efforts within the Consortium for Sequencing the Food Supply Chain include assessing microbiomes for potential food safety applications. Together, we are building new capabilities through advancing science and technology to better ensure the safety, quality and authenticity of foods and their ingredients. Haiminen, N., Edlund, S., Chambliss, D. et al. Food authentication from shotgun sequencing reads with an application on high protein powders. npj Sci Food 3, 24 (2019) doi:10.1038/s41538-019-0056-6
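To give a concrete feel for read-based identification, here is a toy k-mer matcher in Python. This is purely an illustration of the general idea – the reference “genomes” below are invented strings, and it is not the pipeline described in the paper, which matches shotgun sequencing reads against databases of thousands of full plant and animal genomes.

```python
from collections import Counter

def kmers(seq, k=4):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Hypothetical reference "genomes" -- real databases hold thousands of
# full plant and animal genomes.
REFERENCES = {
    "chicken": "ATGGCGTACGTTAGCATCGA",
    "soy":     "TTGACCATGGTCCAAGGTTA",
}

def classify(read, refs, k=4):
    """Assign a read to the reference sharing the most k-mers, or None."""
    read_kmers = kmers(read, k)
    scores = {name: len(read_kmers & kmers(seq, k)) for name, seq in refs.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

reads = ["ATGGCGTACG", "CCATGGTCCA", "GGGGGGGGGG"]
composition = Counter(classify(r, REFERENCES) for r in reads)
# Reads matching no reference (None) would be the "unexpected components"
# worth investigating in an authentication setting.
```

The fraction of reads assigned to each reference gives a crude composition estimate; in the real pipeline, the unexpected matches are the interesting part, since they can indicate trace ingredients introduced during transport or processing.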
Marginalized, stigmatized and minority populations have been subject to the traumas, direct impact and verbal instigation and provocation of hate crimes for centuries. Today’s world is loaded with online and mobile communication platforms, apps, and open, deep and dark web avenues for criminal collaboration. Hate crime has become an increasing concern to social order in areas of Europe, including the United Kingdom, with specific regions and countries taking proactive steps to prevent, monitor, and predict hate crime for a safer society. Individuals of all genders, religions, sexual orientations, races, beliefs and ages are able to live with greater security as government and policing organizations collect critical disaggregated and detailed online data via OSINT (open-source intelligence) tools. With key insights, agents and analysts are empowered with simplified, automated predictive analysis of potentially hateful online content. Social media intelligence powered by robust OSINT investigation tools and web monitoring technologies has provided reliable and precision-focused takeaways to government and law enforcement agencies with zero-tolerance hate crime policies and laws. We’ll discuss the foundation for the EU’s developments as a progressive leader in the prevention and targeting of hate crime via online data collection, analysis, recording and use in policing and government organizations – from a few decades back to date.

Data & Hate Crimes: Prevention & Monitoring in National Security & Law Enforcement Agencies

The mission to eliminate hate crime, and online content reflecting malicious intentions, is now an integral aspect of the European Union’s government security and of municipal and regional policing in various countries.
As far back as 1996, the ECRI established a policy to actively collect and assess online or print content reflecting hate crime, with one source also indicating, just under two decades later: “The 2013 Council Conclusions on combating hate crime in the European Union stress in particular the need for an efficient collection of reliable and comparable data on hate crimes, including, as far as possible, the number of such incidents reported by the public and recorded by the authorities and the bias motives behind these crimes.” By 2004, the ECRI indicated that a streamlined collection and monitoring system was fundamental to the successful prevention of racial hate crime, and in 2007, the ECRI insisted on the development of a proprietary system or solution to effectively understand, define and prevent verbal hate crime expressed online, or instigating its occurrence. The aforementioned source indicates that every detail of hate crimes (using the example of racial hatred) filed in police reports is essential to progress for law enforcement and policing agencies: from locations, specific violent or verbal threats, and the vocabulary and lexicon used to express racial hatred, to times and dates, and individuals or groups of individuals associated with targeted crime and hatred of specific minorities and populations.

How it Works: Recording & Analyzing Hate Crime Content Via OSINT Tools & Predictive Analysis

“In its regular country reports, the Council of Europe’s human rights monitoring body, ECRI, identifies gaps and recommendations for improvement in the area of hate crime, and more specifically in recording and collecting data on hate crime,” the aforementioned source indicates.
The principal measures taken to fill those gaps and optimize the efficiency of crime investigations include:

- Organized recording, collection and assessment of hate crime data via advanced solutions and internal systems that scan the web's communications for accurate and relevant intelligence insights.
- Tagging and identifying key terminology and jargon as hate crime, with bias indicators embedded into sophisticated OSINT tools that in turn provide concrete predictive analysis of social media and online content of all forms that express, instigate, provoke or connote hate crime or its escalation.
- Release of data in a modular and controlled manner that allows government and policing organizations to define acts and spoken hate crimes, letting analysts and personnel efficiently group the main actors and identify the links between them in organized hate crime.
- Easy access for police officers to flag and identify data that presents evidence of hate crime or its development, or documentation of negative sentiment associated with language and slang, street words, racial slurs, prejudice or malicious content.
- Support for hate crime victims and their families, to ensure in-depth investigations look out primarily for those at risk, and for social order and justice at large, along with equality and human rights.

The United Kingdom has adopted policies that are critical for policing and investigative action against hate crime to grow and succeed. Publicly available guidance and lists of bias indicators of hate crime, along with the flagging of hate crimes or bias motives in content, are a compulsory and integral part of the UK's policing system. The country has also actively participated in a six-way collaborative initiative with other EU countries to prevent hate crime, as an active member of the 'Facing all the Facts' project, which powers training and identification methods to prevent hate crime.
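As a toy illustration of the tagging step described above (not any vendor's actual product logic), a flagging pass can match content against curated lists of bias indicators. The categories and terms here are invented placeholders; a real system would use analyst-maintained lexicons and statistical models rather than plain substring matching.

```python
# Toy sketch of bias-indicator tagging for analyst review.
# Categories and terms are illustrative placeholders only.
BIAS_INDICATORS = {
    "targeted-threat": {"attack", "burn down"},
    "dehumanising-language": {"vermin", "infestation"},
}

def flag_post(text):
    """Return the sorted list of indicator categories matched in `text`."""
    lowered = text.lower()
    return sorted(
        category
        for category, terms in BIAS_INDICATORS.items()
        if any(term in lowered for term in terms)
    )

posts = [
    "We should attack their meeting place",
    "Lovely weather in Brussels today",
]
for post in posts:
    print(post, "->", flag_post(post))
```

In practice, flagged posts would be queued for human review rather than acted on automatically, since naive keyword matching produces many false positives.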
Proprietary AI-driven software scans the web's surface, deep and dark layers to support social order, safety and peace of mind for people of every race, gender, nationality, religion or sexual orientation, anywhere in the world. With intelligently extracted data, living in a safer and more accepting world is now possible, both offline and online. Cobwebs Technologies allows national security, government and law enforcement organizations to record, collect, analyze and apply intelligent insights gathered by our AI-powered Web Intelligence platform, a web monitoring solution for crime investigations worldwide. With protocols, procedures and concrete adoption of systematic data assessment, our society can become a safer place for individuals of all backgrounds. With justice only a few clicks away, gather all of the information required to complete a crime investigation with precision and powerful artificial intelligence at your fingertips.
Indonesia Passes Personal Data Protection Law

The bill will apply to local businesses as well as international corporations.

Indonesia has passed a bill protecting personal data after debating it since 2016. The nation now joins other Asian jurisdictions like Singapore and Taiwan that have specific laws protecting personal data. In light of recent data security breaches, the Indonesian government felt that passing the Personal Data Protection (PDP) Bill was essential. Due to its poor data protection policies, the country has been an easy target for cybercriminals. Last year, the personal data of over 279 million Indonesians was allegedly leaked and sold on a hacker platform. Earlier this year, the hacker nicknamed Bjorka leaked 20GB of information from a security breach, containing the personally identifiable information (PII) of 105 million Indonesian citizens.

What Led to the Adoption?

The nation's Minister of Communications and Informatics, Johnny G. Plate, welcomed the decision as a significant step and a necessity for advancing connectivity in the domestic digital market. The nation has recently been the target of several cybersecurity threats, with the hacker collective "Bjorka" claiming to have access to the data of numerous official websites, presidential letters, and secret intelligence agency documents. In August, the same group claimed to have obtained SIM card users' contact information as well as their national identification numbers, according to ZDNET. The security flaws made it clear that the data protection bill needed to be passed immediately. Joko Widodo, the president of Indonesia, emphasized the necessity for key ministries to collaborate and look into the suspected breaches of personal data. Data from Surfshark placed Indonesia third among the nations most frequently harmed by data breaches, with 12.7 million local accounts compromised.

How Will the PDP Bill Act?
The bill is anticipated to consolidate all current and new regulations into one; 32 laws currently exist in Indonesia that deal with the protection of personal data. Puan Maharani, the Speaker of the House of Representatives, had the following to say about the act's passage: "This PDP Bill will provide legal assurance so that every citizen, without exception, (has full control) over their personal data. Thus, there will be no more tears from the people due to online loans that they don't ask for, or doxxing that makes people uncomfortable." She also declared that derivative rules, including the establishment of a supervisory agency tasked with protecting the public's personal data, could be formed immediately after the bill was ratified.

Modeled on the European Union's General Data Protection Regulation (GDPR), Indonesia's Personal Data Protection (PDP) Bill includes a number of international provisions not currently covered by local laws, such as sensitive personal data and a data protection officer. Additionally, the bill will regulate all methods of data processing, such as data collection, storage, updating and correction, as well as deletion. Under the bill, personal data controllers will be required to update and correct errors in personal data within 24 hours of receiving a request to do so. According to the local outlet the Jakarta Post, those deemed to have breached the law may face up to six years in jail and penalties of up to 2% of an organisation's annual revenue.

Indonesia has an estimated 220 million internet users, according to Tempo.co. The country was also projected to account for 40% of Southeast Asia's 2021 e-commerce gross merchandise value, at $70 billion, as stated by ZDNET.
Recent surveys and statistics show that remote working is at an all-time high and shows no signs of stopping, a testament that workplace practices are indeed changing. Of course, the concept of remote working is nothing new. Since the dawn of the public Internet in the 1980s, we have imagined a world where workers are not bound by the physical confines of an office. In fact, remote working was the norm long before central business districts and travelling to work even existed.

Looking back through time, skilled traders worked out of their homes. Blacksmiths, leather workers, potters and carpenters all set up shop at their residence, and produced and sold their goods from home. The advent of the Industrial Revolution brought with it the need for automation and the creation of centralised factories. Large-scale production and machinery required employees to be at the factories in order to complete their work. Right about this time is also when other professions started committing to allotted office areas. This, however, did not last forever. Fast forward to just after World War II, with advancements in technology paving the way for the modern workplace. Thanks to the proliferation of home and portable computing, the Internet and public WiFi, businesses today can operate without the need for office space and with employees from virtually anywhere around the globe.

Are they actually working?

Changes in work practices have been driven by evolving management styles as well as changes in employee behaviour. Traditionally, the main argument against home working was lack of control, with employers asking themselves: "How can I tell if my staff are actually working if I cannot see them?" Today's best-practice managers are moving away from micro-management and are placing more emphasis on outcomes and results. This typically comes with an increased level of trust, but also calls for harder decisions if results are lacking.
Flexible jobs have come under plenty of scrutiny from academics, news organisations, businesses and government agencies, all looking to provide some insight into how job flexibility affects businesses and marketplaces. Numerous studies, corporate white papers and news articles offer an incredible array of statistics about remote work and the many benefits telecommuting offers to employers and workers alike. With that in mind, let us explore a few statistically backed facts about remote working.

A productivity boon

Let's be honest: an office has many distractions, like water cooler gossip, impromptu meetings and loud colleagues. According to surveys, 86 per cent of workers prefer to work alone in order to hit maximum productivity, 61 per cent agree that loud co-workers are the biggest office distraction, and 40 per cent consider impromptu meetings (such as other employees stopping by their desk) another major source of work interruption. Conversely, more than two-thirds of employers report increased productivity for their telecommuting staff. Aside from being able to focus more with fewer distractions, remote employees also typically continue to work while sick (without infecting others), return to work more quickly after medical issues, and do not take full days off in order to run errands or schedule appointments.

A decrease in real estate costs and overheads

Businesses of all sizes report significant reductions in operating costs by allowing their employees to work remotely. Forbes magazine reported that some organisations saved as much as $78 million just by letting half of their workforce telecommute. For example, American Express reported annual savings of around $15 million by offering flexible remote work options. The realised savings do not come from reduced office space alone.
Another large organisation, Aetna, reported that the annual voluntary turnover for its employees who work from home was in the 2 to 3 per cent range, compared with a company-wide turnover of about 8 per cent. Keeping in mind the high costs of hiring and training new employees, it's easy to see how simply allowing staff to work remotely can also serve as a great talent retention tool.

It's the future

A 2018 global study by the International Workplace Group, which quizzed more than 18,000 professionals from a wide range of industries across nearly 100 countries, concluded that remote working is on the rise. For example, today 50 per cent of Australian employees work remotely for at least half of the week, while more than two-thirds work at least one day a week outside the office, and these numbers are only going to increase as Internet speeds and coverage reach all-time highs. Such flexibility was not an option just a few years ago. According to Gartner, by 2020 organisations that allow remote working will see an increase in their employee retention rates of 10 per cent or more. These reported trends confirm one thing: gone are the days of working 9-5 from a fixed office. Businesses are embracing new and exciting working models, which are proving to be a win-win for employer and employee alike. We are reaching the tipping point, and one day soon flexible working will simply be referred to as 'working'.

Don't get left behind

So, realising the great benefits of telecommuting, and the fact that it's almost an inevitability, what can you do to ensure that your business doesn't get left in the dust? This is where our NetConnect solution comes into the equation. NetConnect is the ideal enabler for remote working without compromise on business security and continuity. With NetConnect, businesses can quickly and easily expand their office beyond the physical space, increasing worker productivity and efficiency.
To learn more about how you can propel your business into the future with NetConnect visit the NetConnect product page.
Have you ever seen someone plug a USB dongle into their device in order to sign in to something? Or worked for a company that required you to use one whenever you unlocked your laptop or logged in to an important account? These authenticators are called hardware security keys. Some people will also refer to them as just security keys, or two-factor security keys. Here, we'll break down what these dongles are and how they make it harder for criminals to gain access to your devices and accounts.

What is a hardware security key?

A hardware security key is a way to prove that you or someone you trust, and not a criminal, is trying to access or sign in to something. They're known as a "possession factor" because they prove you physically own something used to authenticate your account. Security keys are a form of second-factor or multi-factor authentication (MFA). This means that when you log in with your normal credentials, which could be a four-digit PIN code on your phone or a username and password on a website, you'll be asked to provide your security key too. Not all devices and services support these keys, but the situation is improving all the time. You can also use security keys with many single sign-on services like Okta and password managers including 1Password (more on that later).

The benefits of using a hardware security key

You might be wondering: "Okay, it's a second form of authentication – how exactly does that keep out criminals?" Think of it this way: imagine you're the ruler of a castle, and you want to make sure that only your most loyal knights are allowed inside. You could create a password for the front gate, but what if one of your enemies overhears it? To be on the safe side, you could give your knights a brooch, then tell your guard at the front gate to only allow people through who know the password and possess the brooch. Of course, it's not a completely perfect system.
It's possible an assassin could overhear the password and steal a brooch from one of your knights. But it's very unlikely, which makes the system far more secure than just using a password. Hardware security keys are a lot like the brooch: a physical item used to authenticate your account in addition to a password.

But they aren't the only form of multi-factor authentication (MFA) available. Instead of providing a physical key, you might be familiar with other MFA options, like having a one-time code sent via email, text message, or an authentication app like Authy. But a security key could be considered more secure than most of these methods. Why? Because it's a physical object. A criminal is unlikely to target you specifically, find out where you work or live, travel to that location (or send someone on their behalf) and try to steal your key. The process is simply too expensive and time-consuming, especially when they can use other tactics like social engineering.

The downsides of hardware security keys

Nothing is perfect. If you're thinking of using a hardware security key, you should also be aware of the drawbacks and plan accordingly:

- Hardware security keys cost money. Physical security keys are generally affordable, but they aren't free. Still, buying one is arguably a small price to pay for securing your digital life. Many companies will also offer their employees free or heavily discounted security keys to use at work.
- You have to take your key with you. Most of them are small, but it's one more thing to keep in your bag, on a keychain, or stuffed in a pocket.
- You can misplace or lose a physical security key. Many services will let you authenticate another way, like entering a recovery code, if you forget, lose, or destroy your hardware security key. Nevertheless, it's never fun to arrive at the office and realize that you've left your authenticator at home.
- Some keys only work with specific devices.
There are all sorts of security keys that support USB-A, USB-C, Lightning, NFC, or a combination of all four. Make sure you choose a key that works with all your devices, or consider using multiple keys that cover everything you own.

Using a hardware security key with 1Password

Should you use a hardware security key to protect your 1Password account? That's up to you. 1Password is already secure by design. All of your passwords and other saved items are protected by two things: your 1Password account password and your Secret Key. Only you know your account password, and your Secret Key is generated locally during setup. The two are combined on-device to encrypt your vault data and are never shared with 1Password. We have many protections in place to stop criminals from accessing our servers. But even if a thief somehow slipped through, they would only have access to a bunch of encrypted gibberish. All of the data would be worthless without both your account password and Secret Key. But if you would like an extra layer of protection, you can secure everything in your private vaults with a security key too. This means you'll be asked to authenticate with the key when you sign in to your 1Password account.

The bottom line

Hardware security keys are an excellent form of multi-factor authentication. You might want to use one for all of your devices and online accounts, or only for a select group that you think should have a higher level of security. Not ready to take the plunge? You can still secure your digital life by using a password manager. 1Password will help you create, store, and autofill strong passwords for all your online accounts. Our security model also ensures that only you can access everything that you've saved in your private vaults, so you can rest assured that you've put your safety first.
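The general idea of combining a user-chosen password with a locally generated secret can be sketched with standard primitives. To be clear, this is a generic illustration and not 1Password's actual key-derivation scheme; the function name, salt handling, and iteration count are assumptions made for the example.

```python
import hashlib
import hmac

def derive_vault_key(account_password: str, secret_key: str, salt: bytes) -> bytes:
    """Combine two secrets into one 32-byte encryption key (illustrative only)."""
    # Stretch the user-chosen password so offline guessing is expensive.
    stretched = hashlib.pbkdf2_hmac(
        "sha256", account_password.encode(), salt, 100_000
    )
    # Mix in the high-entropy, locally generated secret key.
    return hmac.new(secret_key.encode(), stretched, hashlib.sha256).digest()

salt = b"per-account-salt"
k1 = derive_vault_key("correct horse battery", "A3-EXAMPLE-SECRET", salt)
k2 = derive_vault_key("correct horse battery", "A3-DIFFERENT-KEY", salt)
print(k1 != k2)  # prints True: both secrets are needed to reproduce the key
```

The design point is that a stolen password alone (or a stolen secret alone) yields nothing: the derived key, and therefore the encrypted vault, requires both inputs.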
Data auditing is the assessment of data for quality throughout its lifecycle to ensure its accuracy and efficacy for specific usage. Data performance is measured and issues are identified for remediation. Data auditing results in better data quality, which enables enhanced analytics to improve operations.

Issues Uncovered by Data Auditing
- Systemic data issues
- Incomplete data sets
- Data security gaps
- Barriers to data access
- Siloed data
- Insufficient depth or breadth of data collection

Data Auditing Benefits
- Identifies systemic data issues
- Improves data quality
- Enables more effective data analytics
- Provides risk assessment
- Supports compliance adherence

Why Data Auditing is Needed

Data auditing underpins all data-related activities. Organizations learn where their data is, and gain insights into its quality, security, usage, and efficacy as a resource for operations and analytics.

Three Key Data Auditing Functions

1. Data Quality
Identifies inaccurate data and its root causes, allowing organizations to implement processes to remediate issues.

2. Regulatory Compliance
Helps organizations adhere to corporate, industry, and government regulations by providing deep visibility into data location, usage, and security.

3. Improve Operations
From sales and marketing to customer service and human resources, data auditing up-levels data quality, making operations run more smoothly and effectively.

Questions that Data Auditing Answers

Data auditing answers questions that uncover issues, reinforce compliance, and support strategic uses of data.
- What data is being collected?
- Should other data be collected?
- Is unnecessary data being collected?
- Where is data stored?
- Does data storage meet operational and regulatory requirements?
- Are the appropriate data protection systems in place?
- Has a lifecycle been defined for data, including an "end of life" date?
- Have functions been assigned to all data elements?
- Has data been appropriately classified?
- What is the process for honoring a request to delete data?

8 Ways Data Auditing Helps Organizations
1. Supports the safeguarding of data assets
2. Ensures that data access controls are working
3. Flags and identifies the source of unauthorized data access
4. Identifies policy flaws that could introduce vulnerabilities
5. Helps improve internal data-related processes
6. Includes forensic analysis to detect fraud, intrusion, and insider threats
7. Supports rapid response to data-related issues
8. Reviews third parties' (e.g., partners', vendors') activities

How Data Auditing is Performed: Five Key Components

1. Stakeholder Engagement
Data is stored and used across organizations as well as in off-prem storage and cloud applications, so data auditing must engage the creators, collectors, users, and managers of all data. By engaging with the stakeholders who represent an organization's information, data auditing delivers valuable insights into data collection, storage, and usage processes. Understanding stakeholders' unique challenges also helps focus on areas of specific interest to those groups.

2. Data Mapping
Successful data auditing depends on reviewing all information, which means its location must be identified and it must be clear what each data element is. The first data audit can be tedious, because data maps must be created. When creating data maps, it is important to include data located in offsite storage, with partners, and in cloud applications. Unstructured data should also be included in data auditing programs. Once data maps have identified the location of all data elements, auditing is easier, and more sophisticated work can be done.

3. Goals and Success Metrics
Before beginning data auditing, establish goals and success metrics. These should be based on the organization's needs and take into account stakeholders' objectives.
Specific objectives to consider include data quality (i.e., accuracy), depth, breadth, and consistency, among other criteria. The ultimate objective of data auditing is to ensure the optimal performance of data.

4. Data Cleansing and Automation
A clear understanding of where data resides is required, followed by identifying any issues that need remediation. Data cleansing is an important step in data auditing: determining what is not necessary (e.g., obsolete or duplicate data), and then archiving or destroying it. Data auditing should also include an assessment of data quality. Once data has been cleaned up, systems and processes need to be established for ongoing data auditing. These include deploying technology to automate many auditing functions, such as checking for data accuracy, consistency, timeliness, validity, classifications, security, and compliance. Automation leaves data auditors free to focus on more meaningful work that cannot be done effectively with automated tools, including reviewing anomalies, recommending remediation, and interpreting analytical results.

5. Maintenance and Monitoring
The final component supports keeping track of data throughout its lifecycle. Data auditing is a continual process that uses data policies and procedures to ensure that data is properly managed. With data auditing, data creation, collection, usage, storage, and destruction are monitored to ensure adherence to rules and identify anomalies or issues related to data quality and data security. This also includes maintaining systems and evaluating processes with an eye toward prioritization and optimization.

Evaluating Data Quality

Assessing the quality of data is a critical function of data auditing. Technology plays a key role here, automating the process of evaluating data quality. Data auditing takes accuracy into account, detecting errors produced by humans, bots, algorithms, and system glitches.
It also evaluates consistency to ensure that specified formats and protocols are followed for each data element.

Automated Data Auditing

A key role of technology in data auditing is automation. Many repetitive tasks lend themselves to automation, and automating data auditing functions expedites the evaluation of data health criteria such as accuracy, consistency, and usage. With software, workflows can be created. Automation has a number of benefits, including:
- Automated classification of data elements
- Elimination of error-prone manual processes
- Increased use of metadata connectors
- Improved stakeholder satisfaction
- Enhanced data quality

Don't Be Daunted by Data Auditing

Data auditing prevents the cascade effect of poor data quality and security, which can negatively affect all areas of an organization. It provides visibility into all of an organization's information. With data auditing, data is routinely assessed to detect errors and anomalies. Done well, it also identifies data silos that should be rearchitected to provide cross-organizational access. An organization's first foray into data auditing can be daunting. However, it is not hard once a process has been created, and the results far outweigh the effort. The results of regular data auditing are improved analytics and business operations, along with significantly reduced costs for data maintenance.

Last updated: 06/02/2021
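As a rough sketch of what such automated checks look like in practice, the following toy audit pass flags missing values, duplicate identifiers, and inconsistent date formats in a batch of records. The field names and rules are hypothetical examples chosen for illustration, not any product's actual logic.

```python
import re

# Expected format for the (hypothetical) "created" field: ISO 8601 dates.
DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def audit(records):
    """Return a list of (record_index, issue) pairs found in `records`."""
    issues = []
    seen_ids = set()
    for i, rec in enumerate(records):
        if rec.get("email") in (None, ""):
            issues.append((i, "missing email"))
        if rec["id"] in seen_ids:
            issues.append((i, "duplicate id"))
        seen_ids.add(rec["id"])
        if not DATE_RE.match(rec.get("created", "")):
            issues.append((i, "bad date format"))
    return issues

records = [
    {"id": 1, "email": "a@example.com", "created": "2021-06-02"},
    {"id": 1, "email": "", "created": "02/06/2021"},
]
print(audit(records))
# [(1, 'missing email'), (1, 'duplicate id'), (1, 'bad date format')]
```

In a real audit pipeline, checks like these would run on a schedule, write their findings to a log, and route anomalies to auditors for the judgment work that automation cannot do.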
People have been talking about 'big data' for years, but the buzz has intensified as more people have begun to understand its potential and look for ways to exploit it for their organisations. One of the big challenges for CIOs is to make use of the relevant tools, and to create a culture in which others appreciate how they can support the effort.

Big data is a collection of data sets too large and complex for regular database management tools, in volumes of petabytes (1m gigabytes) or even exabytes (1bn gigabytes). It makes it possible to measure human, business or scientific patterns in fine detail, and can provide highly valuable insights to support the development of products and services. The potential to exploit big data is increasing along with the overall volume. It can take in streams of data from sources such as digital sensors and cameras, which can track industrial activity and environmental change, and social media, which can provide evidence of people's attitudes and preferences.

But it is very messy. As the volume of data grows it can be expensive to store, it requires multiple servers for processing, and there is no 'silver bullet' IT solution. There are software frameworks that can be used in managing big data, such as MapReduce, which supports developers in writing the relevant programmes; but it still requires a lot of work to establish how the data should be split, valued and then pulled together. In addition, the process is aimed at extracting packets of information that have a high value for the organisation, and these are unlikely to align cleanly with the original structure of the dataset, and likely to come from only a small proportion of the total. Some experts have pointed out that as more data is used, much of it is duplicated (just think about data back-ups or tweets that are retweeted), and this reduces the proportion of extracted information against the total.

It is also necessary to convey the results in terms that make sense to business leaders.
Data has to be presented in terms that are clearly relevant to the challenges and opportunities facing an organisation, and this requires the specialists to tell a story that others can understand. To have any chance of exploiting big data successfully, you need people with the programming skills to identify the information amid the mountain of data.

A recent report by the Saïd Business School, Oxford, Analytics: the real world use of big data, suggests that organisations have been acquiring some of these skills: a worldwide survey of business and IT professionals showed that about three-quarters now have big data projects either in development or under way. But it also suggests there are limits to their ambitions: less than a quarter had the necessary skills and resources to deal with unstructured data. This is restraining them from trying to harness big sets of data from outside their organisations, especially from the more chaotic world of social media.

In the shorter term, CIOs will have to limit their ambitions to match the resources their organisations can afford; but it is worth thinking about training programmes that could equip their programmers with the skills to extract information from data obtained from outside. At the same time, they need to ensure that any big data project begins with a clear view of what the organisation wants from the process. This is where it becomes important to have a clear understanding and a shared sense of purpose between the techies and the business managers.

A big step towards achieving this would be to ensure that business directors and managers are data literate, with an understanding of where the insights can be found in data relevant to their activities, and what patterns would be valuable. CIOs can make a difference in encouraging that data literacy, so that colleagues grasp the importance of existing data sets, then begin to think about what other sets could provide benefits to the organisation.
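The MapReduce model mentioned earlier can be illustrated with a single-process word count, the canonical example. Real frameworks distribute the map and reduce phases across many machines, but the shape of the programme is the same; the function names here are illustrative.

```python
from collections import defaultdict

def map_phase(docs):
    # Map: emit a (key, value) pair for every word in every document.
    for doc in docs:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # "Shuffle" then reduce: group values by key and sum them.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

docs = ["big data big insights", "messy data"]
print(reduce_phase(map_phase(docs)))
# {'big': 2, 'data': 2, 'insights': 1, 'messy': 1}
```

The hard work the article describes (deciding how data is split, valued and pulled together) corresponds to designing the map and reduce functions and the key structure for the dataset at hand, not to the framework plumbing itself.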
Senior executives need to think about their business issues and how they relate to data available from any source, even if it may be very difficult to collate and classify, and then see what they can do about obtaining and using it to their benefit. That can provide the driving force for the programmers to prove their worth. This may be a daunting prospect, but it will often be the case that unstructured data is more likely to yield the genuinely fresh insights that can give an organisation an edge in its business. CIOs and their colleagues need to think about how they can harness big data to obtain big advantages.

David Clarke MBE, Group Chief Executive Officer, BCS, The Chartered Institute for IT
While telehealth has been widely available for decades, the COVID-19 pandemic prompted a huge surge in virtual care use. Yet telehealth is limited in scope by in-person lab testing: 70% of today's medical decisions depend on laboratory test results, and the future of virtual care depends upon providers' ability to merge remote diagnostics with virtual care and telehealth services.

Companies that use telehealth can launch at-home diagnostics to create a full remote care cycle in which patients and users collect samples at home. The samples are mailed to labs, and the results are reviewed by providers, who prescribe additional follow-up care. At-home self-testing can screen for infectious diseases, fertility markers, allergies, cancers, comprehensive metabolic panels, and more. Telehealth is popular with patients, providers, and even insurance companies, and it can test for everything from HIV to cancer.

Telehealth during COVID-19

Telemedicine has been in regular use for a hundred years, with ships in the 1920s using radio to communicate with doctors on land. Hospital-based telemedicine expanded in the late 1950s and early 1960s to accommodate stroke victims and support intensive care units. However, COVID-19 ushered in a telehealth boom in the United States on a huge public scale to accommodate social distancing rules. The U.S. Department of Health and Human Services reported that "massive increases in the use of telehealth helped maintain some health care access during the COVID-19 pandemic, with specialists like behavioral health providers seeing the highest telehealth utilization relative to other providers." There was a 154% increase in telehealth visits during the last week of March 2020 alone. Throughout the pandemic, telehealth reached new patients, including marginalized groups such as African-Americans. Penn Medicine hospitals in Philadelphia reported that Black patients' visit completion rates increased to 70% from 52% with access to telehealth services.
As a result of its popularity, insurance providers have started offering virtual care plan options. UnitedHealthcare provides "a virtual-first health plan that offers an integrated approach to provide care both virtually and in-person." Some argue that virtual-first primary care, where patients engage with telemedicine services from the beginning, is likely to become the "starting place for most primary care."

At-home health testing

Home diagnostics allow providers to collect important healthcare data remotely. Patients use kit devices to collect saliva or blood (in this case, using a DBS card test) and mail samples to the lab. These labs follow the same requirements and protocols as labs that process in-clinic diagnostics. Certain types of at-home tests, like HIV self-tests, have been available since 1996. Self-testing not only helps marginalized and vulnerable patients access healthcare (no need to take time off work to visit a lab, no need to call in accessible transportation) but also serves providers: it can lower the workload for those who care for patients with chronic conditions. At-home health testing could also help curb rapidly rising STI rates. (According to the CDC, "data shows another sharp uptick in syphilis and gonorrhea cases and detection gaps for chlamydia, all made worse by the pandemic.") The FDA, in a show of support for self-testing, recently changed its policy to support at-home diagnostic testing for infectious disease management.

Expand telehealth offerings with remote diagnostics

Both patients and physicians like telehealth and want it to continue after the pandemic comes to an end, according to a study conducted by the American Medical Association and the COVID-19 Healthcare Coalition. 79% of patients were "very satisfied" with the care received during their last telehealth visit, and 68% of physicians told researchers they were motivated to increase the use of telehealth in their practice.
By expanding telehealth with remote diagnostics, providers would be able to see patients and users as part of a comprehensive telehealth cycle. Telehealth, now limited by in-person lab testing, could become the healthcare of choice, especially for those with chronic illnesses or otherwise limited care options.
What is LDAP Injection?

Many companies use LDAP services. LDAP serves as a repository for user authentication and enables a single sign-on (SSO) environment. LDAP is most commonly used for privilege management, resource management, and access control.

LDAP Injection attacks are similar to SQL Injection attacks. These attacks abuse the parameters used in an LDAP query. In most cases, the application does not filter parameters correctly, which can lead to a vulnerable environment in which an attacker can inject malicious code. LDAP exploits can result in the exposure and theft of sensitive data. Advanced LDAP Injection techniques can also execute arbitrary commands, letting attackers obtain unauthorized permissions and alter LDAP tree information. Environments that are most vulnerable to LDAP Injection attacks include ADAM and OpenLDAP.

In this article, you will learn:
- What is LDAP Injection?
- How Do LDAP Injection Attacks Work?
- Types of LDAP Injections
- LDAP Injection Examples Using Logical Operators
- Blind LDAP Injections
- How to Prevent LDAP Vulnerabilities

How Do LDAP Injection Attacks Work?

Clients query an LDAP server by sending a request for a directory entry that matches a specific filter. If an entry matching the LDAP search filter is found, the server returns the requested information. Search filters used in LDAP queries follow the syntax specified in RFC 4515. Filters are constructed from one or more LDAP attributes specified as key/value pairs in parentheses. Filters can be combined using logical and comparison operators and can contain wildcards. Here are some examples:

- (cn=David*) matches anything with a common name beginning with the string David (the asterisk matches any sequence of characters).
- (!(cn=David*)) matches anything where the common name does not start with the string David.
- (&(cn=D*)(cn=*Smith)) uses the AND logical operator, represented by the & symbol. It matches entries whose common name starts with the letter D and ends with Smith.
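The filter grammar above can be illustrated with a few tiny helper functions. This is a sketch for illustration only; the helper names are my own and are not part of any LDAP library:

```python
def eq(attr, value):
    # builds a basic assertion such as (cn=David*)
    return f"({attr}={value})"

def and_(*filters):
    # AND combination: (&(f1)(f2)...)
    return "(&" + "".join(filters) + ")"

def or_(*filters):
    # OR combination: (|(f1)(f2)...)
    return "(|" + "".join(filters) + ")"

def not_(f):
    # negation: (!(f))
    return "(!" + f + ")"

print(not_(eq("cn", "David*")))                     # (!(cn=David*))
print(and_(eq("cn", "D*"), eq("cn", "*Smith")))     # (&(cn=D*)(cn=*Smith))
print(or_(eq("cn", "David*"), eq("cn", "Elisa*")))  # (|(cn=David*)(cn=Elisa*))
```

Composing filters from small pieces like this mirrors how applications typically build them, which is exactly why unescaped user input inside one of the pieces is dangerous.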
- (|(cn=David*)(cn=Elisa*)) uses the OR logical operator, represented by the pipe symbol. It matches entries whose common name starts with one of the strings David or Elisa.

Similar to SQL injection and related code injection attacks, an LDAP injection vulnerability results when an application injects unfiltered user input directly into an LDAP statement. An attacker can use LDAP filter syntax to pass a string value that causes the LDAP server to execute additional queries and other LDAP statements. Typically the injected command exploits misconfiguration or inappropriate permissions set on the LDAP server.

Types of LDAP Injection Attacks

Access Control Bypass

Login pages typically have two text box fields: one for the username and one for the password. The user inputs are USER (Uname) and PASSWORD (Pwd). A client supplies a user/password pair, and to confirm the existence of this pair, the application constructs an LDAP search filter and sends it to the LDAP server. An attacker can enter a valid username (john90, for example) while injecting an additional filter sequence after the name, bypassing the password check. Once the username is known, any string can be introduced as the Pwd value. The query sent to the server then begins with the complete filter (&(USER=john90)(&)), and the LDAP server processes only that first filter. Since (&) is always true, the attacker enters the system without the proper password.

Elevation of Privileges

Some queries list documents that are visible to users with a low security level, for example /Information/Reports and /Information/UpcomingProjects files in the directory. The "Information" part is the user entry for the first parameter. All these documents have a "Low" security level, and "Low" is the value for the second parameter. An injection can also give the attacker access to higher security levels.
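The bypass can be reproduced with a few lines of Python that mimic the vulnerable string concatenation. The filter template and field names below are assumptions for illustration, not taken from any particular application:

```python
def naive_login_filter(uname, pwd):
    # VULNERABLE: user input is concatenated directly into the filter
    return f"(&(USER={uname})(PASSWORD={pwd}))"

# The attacker supplies a valid username followed by ")(&)" to close the
# USER condition early and append the always-true (&) condition.
injected = naive_login_filter("john90)(&)", "wrong-password")
print(injected)
# (&(USER=john90)(&))(PASSWORD=wrong-password))
# A lenient server stops at the first complete filter: (&(USER=john90)(&))
```

Everything after the first complete filter, including the password check, is dangling text that some servers simply ignore.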
In order to do that, the attacker injects a sequence that closes the first filter condition with a wildcard security level. The resulting query contains two filters, and as before the LDAP server processes only the first one: the query that gets processed is (&(directory=Information)(security level=*)), while the original condition (&(directory=Information)(security level=low)) is ignored completely. That is how attackers see a list of documents that can usually only be accessed by users with all security levels, even though the attacker doesn't actually have privileges to see this information.

Some resource explorers let a user know exactly which resources are available in the system. For example, on a website dedicated to selling clothing, the user can look for a specific shirt or pants and see if they are available for sale. In this situation, OR LDAP Injections are used. Resource1 and Resource2 represent the kinds of resources in the system; Resource1=Jeans and Resource2=T-Shirts show all the jeans and T-shirts that are available for purchase. How do hackers exploit this? By injecting (uid=*) into Resource1=Jeans, so that the query sent to the server matches user objects as well. The LDAP server then shows all the jeans and all user objects.

LDAP Injection Examples Using Logical Operators

An LDAP filter can be used to make a query that is missing a logical operator (OR or AND). Such an injection results in two filters; the second gets ignored while the first one gets executed in OpenLDAP implementations. ADAM LDAP doesn't allow queries with two filters, which renders this kind of injection useless there. Then there are the & and | standalone symbols. With filters that use the OR or AND logical operators, an injection can produce a filter that isn't even syntactically correct. Yet OpenLDAP will process it regardless: it goes from left to right and ignores all characters after the first filter closes.
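The privilege-elevation case follows the same pattern. The template and attribute name below are assumptions for illustration (the attribute is written security_level here, without a space, to keep the string valid Python):

```python
def naive_docs_filter(directory):
    # VULNERABLE: only documents with a low security level should be listed
    return f"(&(directory={directory})(security_level=low))"

# The injection closes the first condition with a wildcard security level and
# pushes the original low-security condition into a second, ignored filter.
payload = "Information)(security_level=*))(&(directory=Information"
print(naive_docs_filter(payload))
# (&(directory=Information)(security_level=*))(&(directory=Information)(security_level=low))
```

Only the first filter, with its wildcard security level, is processed, so documents at every security level come back.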
What does that entail? Certain LDAP client components ignore the second filter, so only the first complete filter is sent to ADAM or OpenLDAP. That is how injections bypass security. In cases where the application has a framework that checks the filter, the injection needs to be syntactically correct. A syntactically correct injection of this kind produces two well-formed filters, of which the second one gets ignored. Since certain LDAP servers ignore the second filter, and some components don't allow LDAP queries with two filters at all, attackers also create special injections that result in an LDAP query with a single filter.

How do attackers test an application to see if it's vulnerable to code injection? They send a query to the server that generates an invalid input. If the server returns an error message, it means the server executed the query, so code injection techniques are possible. Read on to find out about AND and OR injection environments.

AND LDAP Injection

In this case, the application constructs a query with the "&" operator, together with one or more parameters introduced by the user, and uses it to search the LDAP directory. The search uses value1 and value2 as the values that drive the search in the LDAP directory. Hackers can maintain a correct filter construction while also injecting their malicious code. This is how they abuse the query to pursue their own objectives.

OR LDAP Injection

There are cases where the application makes a normal query with the OR (|) operator, together with one or more parameters that the user introduces. As before, value1 and value2 are used for the search.

Blind LDAP Injections

Hackers can deduce a lot just from a server's response. Even when the application itself doesn't show any error messages, the code that's injected into the LDAP filter will generate either a valid response or an error: a true result or a false result.
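The OR environment can be sketched the same way as the AND case. The (uid=*) example from the resource explorer above looks like this (template and attribute names are assumptions for illustration):

```python
def naive_resource_filter(resource1, resource2):
    # VULNERABLE: OR query listing two kinds of resources
    return f"(|(type={resource1})(type={resource2}))"

# Injecting ")(uid=*" extends the OR with a condition that matches every
# object in the directory that has a uid, i.e. all user objects.
print(naive_resource_filter("Jeans)(uid=*", "T-Shirts"))
# (|(type=Jeans)(uid=*)(type=T-Shirts))
```

The result is a single, syntactically valid OR filter with three branches, so even a server that rejects trailing filters will accept it and return the jeans, the T-shirts, and every user object.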
Attackers exploit this behavior to obtain answers to true-or-false questions from the server. We call these techniques blind attacks. Even though blind LDAP Injection attacks aren't as fast as classic ones, they are easy to implement because they work on binary logic. Hackers use blind LDAP Injections to obtain sensitive information from the LDAP directory.

AND Blind LDAP Injection

Imagine an online shop that lists all Puma shirts from an LDAP directory, but returns no error messages. Any available Puma shirts are shown to the user as icons; if there are no Puma shirts available, the user sees no icons. This is where blind LDAP Injection comes into play. "*)(objectClass=*))(&(objectClass=void" is injected, and the application constructs an LDAP query whose first complete filter is (&(objectClass=*)(objectClass=*)). The server processes only that part of the LDAP filter. Because the objectClass=* filter always returns an object, the shirt icon is shown to the client. An icon showing means the response is true; otherwise the response is false. The attacker can now use blind injection techniques in many ways. Different objectClass values can be deduced with such injections: if even a single shirt icon is shown, the tested objectClass value exists; otherwise, it doesn't. A hacker can obtain all sorts of information by asking TRUE/FALSE questions via blind LDAP injections.

OR Blind LDAP Injection

In an OR environment, the base injection produces an LDAP query that doesn't obtain any objects from the LDAP directory service. The shirt icon doesn't get shown to the client, making it a FALSE response; if an icon is shown, it is a TRUE response. To gather information, the attacker then injects LDAP filters that test specific values, the same way as with the AND blind injection. Keep reading to see how you can protect yourself against LDAP vulnerabilities!
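The binary logic of blind injection can be simulated without a real LDAP server. The sketch below assumes only that the attacker observes a single true/false signal per query (the icon being shown or not) and uses it to recover a hidden attribute value one character at a time; the secret value and function names are invented for the demonstration:

```python
import string

SECRET_UID = "john90"  # hidden directory value the attacker wants to recover

def icon_shown(prefix):
    # Stand-in for a blind query such as (uid=prefix*): the only signal
    # the attacker observes is whether the page renders an icon.
    return SECRET_UID.startswith(prefix)

def extract(alphabet=string.ascii_lowercase + string.digits):
    recovered = ""
    while True:
        for ch in alphabet:
            if icon_shown(recovered + ch):
                recovered += ch
                break
        else:
            # no character extends the prefix: the full value is recovered
            return recovered

print(extract())  # john90
```

Each recovered character costs at most one query per alphabet symbol, which is why blind attacks are slower than classic injection yet still entirely practical.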
How to Prevent LDAP Vulnerabilities

Unfortunately, firewalls and intrusion detection mechanisms will not help here, as all of these attacks occur in the application layer. Your best option is to apply the principles of minimum exposure points and minimum privileges.

Sanitize Inputs and Check Variables

The most effective way of preventing LDAP Injection attacks is to sanitize and check variables. As variables are the building blocks of LDAP filters, hackers use special characters in parameters to create malicious injections. The operators AND "&", OR "|", NOT "!", =, >=, <=, and ~= all need to be filtered at the application layer to ensure they're not used in injection attacks. All values that make up the LDAP filter should be checked against a list of valid values in the application layer before the LDAP server receives the query.

Don't Construct Filters by Concatenating Strings

Avoid creating LDAP search filters by concatenating strings if the string contains user input. Instead, create the filter programmatically using the functionality provided by the LDAP library. For example, in the Java UnboundID LDAP SDK, code like the following combines two user-provided values using an AND operator (the attribute names here are illustrative):

Filter filter = Filter.createANDFilter(
    Filter.createEqualityFilter("uid", userInput1),
    Filter.createEqualityFilter("cn", userInput2));

Creating an LDAP filter programmatically prevents malicious input from generating filter types that are different than expected. If the LDAP library you're using doesn't provide a way to programmatically create search filters, it is strongly recommended to replace it.

Use Access Control on the LDAP Server

To add another layer of protection, follow the principle of least privilege and make sure that each account only has permission to perform the operations needed for the user's role. For example, if you want your application to be able to search for items by looking at uid and mail attributes, only give the account permission to perform searches on these attributes.
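The input sanitization described above can also be implemented as value escaping. The helper below follows the RFC 4515 rule that backslash, parentheses, the asterisk, and NUL must be encoded as a backslash plus two hex digits inside assertion values; it is a sketch, and production code should prefer a maintained LDAP library's own escaping routine:

```python
def escape_filter_value(value):
    # RFC 4515: \ ( ) * and NUL are encoded as backslash plus two hex digits
    special = {'\\': r'\5c', '(': r'\28', ')': r'\29', '*': r'\2a', '\x00': r'\00'}
    return ''.join(special.get(ch, ch) for ch in value)

# A bypass payload such as john90)(&) becomes a harmless literal string:
print(f"(&(USER={escape_filter_value('john90)(&)')})(PASSWORD=secret))")
# (&(USER=john90\29\28&\29)(PASSWORD=secret))
```

After escaping, the injected parentheses can no longer close or open filter conditions, so the whole input is matched as ordinary characters.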
Accounts should be granted read access to properties they need to retrieve but not modify. Before granting write access, make sure the application really needs to modify those properties. If an application needs to process tasks on behalf of other users, you can use a proxied authentication request to ensure these tasks are handled according to the other user’s access control rights. Restrict User Requests in Other Ways Search filters are just one of the elements of an LDAP search request. You can work with other elements to reduce the risk of a malicious user request: - Set the base DN and scope of the LDAP server to match as closely as possible to the type of search being performed. For example, if you want to search for users, and they are in a specific branch of the directory, restrict the search to that branch. - Use a size limit to prevent the server from returning more items than expected. For example, when searching for individual user entries, if the search returns more than one entry, this will result in an error. - Use timeouts to make sure your server doesn’t spend too much time processing searches. For most searches, a timeout of 1-2 seconds is sufficient. Set a timeout that is appropriate given your current search behavior and server performance, and you can avoid malicious searches querying large amounts of data, which may take more time. Dynamic Application Security Testing Dynamic Application Security Testing (DAST) can be used to automatically detect LDAP injection vulnerabilities. Bright enables organizations to automate black-box testing for a long list of vulnerabilities for both applications and APIs. The various types of LDAP injection represent one of the vulnerabilities DAST solutions test for. As noted above it is crucial to detect LDAP vulnerabilities in the development process so they can be remediated early in the SDLC and not leave the organization exposed in production. 
Bright’s ability to be run in an automated way as part of CI/CD ensures developers can detect these vulnerabilities early and remediate them before there is risk for the organization.
Bonus Material: Financial Risk Assessment Template

Risk Management in Banking: Introduction

The Great Depression spawned the most ambitious legislative program ever attempted by the United States: the New Deal. The New Deal created an environment where the federal government accepted responsibility for a variety of issues originally left to individuals, states, and city governments. This unprecedented increase in federal initiatives resulted in the creation of a myriad of bureaucratic agencies. These agencies were all known by their titles' acronyms, creating notorious confusion around who was in charge of what, which led U.S. citizens to use the term "alphabet soup" when describing these regulatory bodies. Alphabet soup has since become a widely used metaphor for an abundance of incomprehensible language containing abbreviations.

Most organizations that operate today must adhere to the rules of various regulating bodies. Some industries, however, are required to adhere to more than others, like the banking industry. Banks are highly regulated in order to promote financial stability, foster competition, and protect consumers. Due to the strictly monitored environment in which banks operate, it's critical that they have strategies in place to keep all their ducks in a row.

Risk management is an essential piece of banking operations. To demonstrate why, this guide will provide an overview of risk management in banking, discuss the types of risk management in commercial banks, detail risk management practices in banks, go over the process of risk management in banks, and explain how to use enterprise risk management software for banks.

Risk Management in Banking Overview

Just like any business, banks face a myriad of risks. However, given how important the banking sector is and the government's stake in keeping risks in check, these risks weigh heavier than they do on most other industries.
There are various types of risks that a bank may face, and it is important to understand how banks manage them.

Types of Risk Management in Commercial Banks

Banking Risk Type #1: Credit Risk

Banks often lend out money. The chance that a loan recipient does not pay back that money is measured as credit risk. Defaults can result in an interruption of cash flows, increased costs for collection, and more.

Banking Risk Type #2: Market Risk

This refers to the risk of an investment decreasing in value as a result of market factors (such as a recession). It is sometimes referred to as "systematic risk."

Banking Risk Type #3: Operational Risk

These are potential sources of losses that result from any sort of operational event, e.g. poorly trained employees, a technological breakdown, or theft of information.

Banking Risk Type #4: Reputational Risk

Say a news story breaks about corruption in a bank's leadership. This may damage its customer relationships, cause a drop in share price, give competitors an advantage, and more.

Banking Risk Type #5: Liquidity Risk

With any financial institution, there is always the risk that it is unable to pay back its liabilities in a timely manner because of unexpected claims or an obligation to sell long-term assets at an undervalued price.

Risk Management Practices in Banks

Banks must prioritize risk management in order to stay on top of (and ahead of) the various critical risks they face every day. Risk management in banks also goes far beyond compliance, as banks must be on the lookout for strategic, operational, price, liquidity, and reputational risk. Staying on top of these risks demands a powerful and flexible bank risk management program. The number of individual regulatory changes that financial institutions and banks must track on a global scale has more than tripled since 2011. There are millions of proposed rules and enforcement actions across multiple jurisdictions that organizations must follow.
This requires regulatory change management to be a prominent practice within any bank's risk management program. Regulatory change management can be described in the simplest terms as "managing regulatory, policy and/or procedures applicable to your organization for your industry." Regulatory compliance can be a burdensome and costly task for financial institutions, so it is critical that organizations have the appropriate processes in place to identify changes to existing regulations as well as new regulations that impact the organization's ability to achieve its objectives. It is equally important that organizations are informed of any potential consequences or fines should they not meet a regulation. Once a regulatory change has been made, it is essential for organizations to assess how they will implement the respective changes to their current policies, processes, and training sessions. As changes are implemented, organizations should begin tracking compliance with the updated regulation going forward.

Risk Management Process in Banking Industry

Having a clear, formalized risk management plan brings additional visibility. Standardizing risk management makes identifying systemic issues that affect the entire bank simple. The ideal risk management plan for a bank serves as a roadmap for improving performance by revealing key dependencies and control effectiveness. With proper implementation of a plan, banks ultimately should be able to better allocate time and resources towards what matters most. Size, brand, market share, and many more characteristics will all shape a bank's risk management program. That being said, all plans should be standardized, meaningful, and actionable.
The same process for defining the steps within your risk management plan can be applied across the board:

Risk Identification in Banks

Banks must create a risk identification process across the organization in order to develop a meaningful risk management program. Note that it's not enough to simply identify what happened; the most effective risk identification techniques focus on root cause. This allows for identification of systemic issues so that controls can be designed to eliminate the cost and time of duplicate effort.

Assessment & Analysis Methodology

Assessing risk in a uniform fashion is the hallmark of a healthy risk management system. It's important to be able to collect and analyze data to determine the likelihood of any given risk and subsequently prioritize remediation efforts.

Risk mitigation is the process of reducing risk exposure and minimizing the likelihood of an incident. Top risks and concerns need to be continually addressed to ensure the bank is fully protected.

Monitoring risk should be an ongoing and proactive process. It involves testing, metric collection, and incident remediation to certify that controls are effective. It also allows for addressing emerging trends to determine whether or not progress is being made on various initiatives.

Creating relationships between risks, business units, mitigation activities, and more paints a cohesive picture of the bank. This allows for recognition of upstream and downstream dependencies, identification of systemic risks, and design of centralized controls. Eliminating silos eliminates the chance of missing critical pieces of information.

Presenting information about how the risk management program is going, in a clear and engaging way, demonstrates effectiveness and can rally the support of various stakeholders at the bank. Develop a risk report that centralizes information and gives a dynamic view of the bank's risk profile.
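A uniform assessment methodology can be as simple as a shared likelihood-times-impact scale applied to every identified risk, so results are comparable across business units. The sketch below is illustrative only; the risk names, the 1-to-5 scales, and the scores are invented, not taken from any regulatory framework:

```python
# Each risk is scored on shared 1-to-5 scales so results are comparable bank-wide
risks = [
    {"name": "Loan default concentration", "likelihood": 4, "impact": 5},
    {"name": "Core banking system outage", "likelihood": 2, "impact": 4},
    {"name": "Regulatory filing delay",    "likelihood": 3, "impact": 2},
]

def score(risk):
    # higher product means higher overall exposure
    return risk["likelihood"] * risk["impact"]

def prioritize(risks):
    # highest score first, so remediation effort goes where exposure is largest
    return sorted(risks, key=score, reverse=True)

for r in prioritize(risks):
    print(score(r), r["name"])
```

A shared scoring function like this is what makes remediation priorities defensible: two risks from different business units can be compared on the same axis.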
ERM Software for Banks

The best way to begin the process of developing a sound banking risk management plan is by using enterprise risk management software. At LogicManager, we transform how you think about risk. Our platform is designed to alleviate the pain points in your bank's ERM processes so that you can focus on aligning and achieving operational and strategic goals. LogicManager's risk management software for banks and expert advisory services provide a risk-based framework and methodology to accomplish all of your governance activities, while simultaneously revealing the connections between those activities and the goals they impact.

Risk Management in Banking Conclusion

Whether you are managing risks defined by the OCC, CFPB, FDIC, or any of the other many regulatory agencies, it's important to think of risk management as helping you accomplish more than just compliance. LogicManager's solutions are designed to meet the needs of your unique and dynamic industry.
Last update: 2022-04-14 10:04:35

HLS (HTTP Live Streaming) is an HTTP-based streaming media network transmission protocol proposed by Apple. HLS works by dividing the entire stream into small HTTP-based files for download, and the process of generating and dividing these files introduces significant latency. In addition, to ensure smooth video playback, players require a minimum number of files before starting playback, and this requirement further increases the latency of HLS. Under the current trend of pursuing low-latency live broadcast, the HLS protocol needs to adapt.

The LHLS (Low Latency HLS) protocol, as the name suggests, aims to reduce the latency of the HLS protocol. The HLS player does not need to be modified: it can directly use the LHLS protocol to reduce latency, shorten the distance between end users and streamers, and improve the user experience. LHLS is intended for all live streaming customers who are looking for low latency in HLS delivery.

LHLS works by sending segment data from the server to the client in advance, before the segment is complete, thereby reducing the latency of HLS.

Notes: The client needs to carry a specific parameter in the request URL to trigger LHLS. The parameter is configurable.
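The scale of the segmentation latency can be seen with some back-of-envelope arithmetic. The segment duration and player buffer count below are typical illustrative values, not figures from this document:

```python
segment_duration_s = 6  # a typical HLS target segment duration
startup_segments = 3    # segments many players buffer before starting playback
packaging_delay_s = segment_duration_s  # a segment must be fully written before it is listed

# Worst-case glass-to-glass startup latency for plain HLS under these assumptions
hls_startup_latency_s = packaging_delay_s + startup_segments * segment_duration_s
print(hls_startup_latency_s)  # 24

# LHLS serves partial segment data as it is produced, so the packaging delay
# and most of the buffered-segment wait collapse toward one segment or less.
```

Shrinking segments reduces this figure but raises request overhead, which is why serving partial segments early, as LHLS does, is the more effective lever.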
New research shows that overdoing it on sugar might harm the livers of even healthy men. British researchers found that a sugar-rich diet was associated with unhealthily high levels of fat in both the blood and the liver. Lead researcher Bruce Griffin from the University of Surrey said that consuming high amounts of sugar can alter fat metabolism in ways that could increase the risk of cardiovascular disease. This study offers yet another valid reason to cut back on sugar: in addition to piling on empty calories, sugar creates more metabolic work for the liver.

In the study, Griffin's team tracked the liver health of a group of middle-aged men with either high (11 men) or low (14 men) levels of fat in their liver.

Non-alcoholic fatty liver disease

Excess fat accumulation in the liver is considered unhealthy. The men with high fat levels already had a condition known as non-alcoholic fatty liver disease (NAFLD). According to the American Liver Foundation, NAFLD is tied to obesity and affects up to one-quarter of Americans. The investigators found that the participants with NAFLD who followed the high-sugar diet developed changes in their fat metabolism, the processes by which the body breaks down fats in the blood and uses them for energy. Those changes are linked to a greater risk of heart disease, heart attacks, and stroke.

Similar changes were also noted in the livers of otherwise healthy men who had a low level of liver fat. In this group, men developed higher levels of fat in the liver after switching to the high-sugar diet, and the researchers found changes in their fat metabolism similar to those of the men who already had NAFLD. "Our findings provide new evidence that consuming high amounts of sugar can alter your fat metabolism in ways that could increase your risk of cardiovascular disease," Griffin said. The research findings are particularly concerning, since the prevalence of NAFLD is on the rise among children as well as adults.

More information: [Clinical Science]
International Pronouns Day, which falls on the third Wednesday of October each year, aims to make it an everyday occurrence for people to educate themselves about, and respect, people's personal pronouns. For many cisgender people, pronouns will be "he/him" or "she/her", matching the gender they were assigned at birth, but this is not the case for everyone. Some people do not identify with the gender they were assigned at birth, so using preferred pronouns can be especially important for trans or non-binary people to feel accepted for who they are.

At Computer Weekly's annual diversity in tech event, supported by Spinks, several keynote speakers highlighted ways people can support under-represented groups in tech. This list of speakers included Alfredo Carpineti, founder and chair of Pride in STEM, who said: "We are all born in the same world that has imposed biases of certain people being better than others, and we need to challenge those within ourselves first."

So why is it helpful, for example, to add your pronouns to email signatures and the like, even if you're cisgender? Carpineti pointed out: "Pronouns in emails make us know that you're an ally, that you're learning." It's been widely discussed that promoting an inclusive culture in workplaces makes for more innovative teams, and therefore more profitable businesses. Developing that inclusive culture means creating a work environment where everyone feels safe to show up to work and be themselves, knowing they will not be judged for it, abused because of it, or made to feel uncomfortable being who they are.
Using people’s preferred pronouns can be a way to do this, and as Pips Bunce, director and head of global markets, core engineering and integration components at Credit Suisse, explained at Computer Weekly’s event: “It’s about respecting someone’s identity, it’s about respecting their heritage and respecting who they are and what they are.” Having this respect applies to other aspects of people’s lives as well – when it comes to the Black Lives Matter (BLM) movement, which has been thrust into the public eye as a result of the murder of George Floyd in the US, you don’t have to be black to be an ally of the cause. “Someone from outside a given group showing their support and allyship I think is even more powerful,” Bunce pointed out. “They don’t have to be from that group to care.” Some people might even become part of a minority group as time goes on, said Ashanti Bentil-Dhue, co-founder of Diversity Ally, who pointed out some people may become mothers later in life, or who were born able-bodied but later become disabled. Bentil-Dhue claimed everyone has a “responsibility” to learn about and support others “regardless of how we self-identify or how other people perceive and view us”. “As human beings we will occupy several identities at different stages in our life at different times,” Bentil-Dhue said. “That is why it is a personal responsibility for all of us to respect each other.” One of the themes of the day at Computer Weekly’s 2020 D&I event was that allies need to be prepared to be uncomfortable when learning about and supporting under-represented groups. “You might not use the right words all the time,” said Edleen John, director of international relations and corporate affairs, and co-partner for equity, diversity and inclusion at the Football Association. 
"If you don't know what the right thing is to say, let's be open and ask." When it comes to supporting people around you, John pointed out there's "power in numbers", so if you're in a majority group, showing support for others means things are more likely to change for the better. Actions such as asking people's pronouns and adding your own to email signatures are small changes for you that can make a huge difference to someone else and their sense of belonging.
Social platforms such as Quora, Reddit or Stack Overflow are gaining popularity as users can easily exchange information within the social community without any barriers. How and why can blockchain make these platforms more credible? Ralph Tkatchuk finds out.

The internet and the rise of social media have significantly changed the way we interact with information. Learning something new today is as easy as unlocking a phone and typing in a query. It has replaced the slow and deliberate research people were forced to do in the past to find answers to their most pressing questions.

One of the biggest outcomes of the social platform revolution is the emergence of services dedicated to asking and answering questions from anywhere in the world. These started off as web forums where communities would gather, but quickly morphed into full-fledged services that became hubs for millions of people to ask anything from the most basic questions to complex queries. Even so, their popularity was for some time eclipsed by search engines like Google, which quickly cornered the search market and rapidly became entrenched.

These Q&A platforms have become immensely popular again due to the speed at which information can be exchanged, as well as the community-based approach, which inspires confidence among users. Even so, there are some signs that the systems created aren't necessarily ideal for all purposes. However, adding blockchain to the equation could prove to be an intriguing solution. Thanks to the technology's many benefits—especially in terms of democratization and tokenization—Q&A platforms could be approaching a renaissance that delivers more transparent and reliable interactions.

GREAT QUESTIONS, OKAY ANSWERS

Getting an answer to a question today is easier than it has ever been. Google's search engine dominance remains unbroken, but instead of simply asking the company's algorithm, users are increasingly turning to their own online communities for answers.
The result is a significant bump in the popularity of crowd-sourced Q&A platforms. Services like Quora let users ask any question, regardless of topic or complexity, and allow others to answer with what they believe is the correct answer. These question-and-answer threads are generally not moderated or regulated, though the company does maintain centralized control to enforce its conduct policy and terms. On threads, users can vote on how helpful answers are, with the most popular responses rising to the top.

The method seems straightforward enough, but it does highlight one major problem. In many cases, the most popular answer is not the correct one. This makes these platforms less valuable as information centers than Google, which generally offers high-value and accurate answers near the top of its results. For a long time, this informational disparity meant that Quora and its contemporaries were seen as lesser than Google and other more "reliable" information sources. In the years since, websites like Stack Overflow, the academic hub for Q&A forums, have returned much of the legitimacy to the sector, but the challenge remains an issue of perception. Even on Quora itself, there are doubts about informational accuracy. Worse still, policies in place for "self-policing" of factually incorrect answers are rarely taken seriously or even carried out. Most importantly, however, the problem stems from the fact that there is no incentive to provide correct answers, only to be the most visible answer.

BLOCKCHAIN REBOOTS THE MODEL

Nevertheless, the social Q&A model holds enormous promise. What was missing, until now, was a tangible way to incentivize better behavior and more reliable responses. With blockchain, these platforms may finally have an answer. One of the technology's biggest benefits is the ability to create application-specific tokens that have real-world value.
When users are incentivized to provide popular answers rather than accurate information, there is no real way to prioritize correct answers. Instead, users who understand how to game the system tend to have their answers lifted to the top spot, while accurate information is left behind. Blockchain-based tokens can be used for a variety of needs in an ecosystem. In the case of Q&A platforms, however, they create a clear incentive to offer better information, instead of simply looking for financial gain from external sources.

ASKfm, a popular Q&A platform, is making such a leap with its upcoming ICO. The company is already one of the largest platforms in the world, with millions of users and a proven ecosystem. However, by adding blockchain, ASKfm aims to create a smart-contract-based system that lets users decide whether the information they're receiving is accurate and satisfactory. Users will be able to vote with their tokens and incentivize better responders to participate within the community. Others are taking a more democratic approach, like blockchain-based Q&A service Tip. The company will award tokens to users who have the most upvotes on their answers, in the hope that this will incentivize the best available information to rise to the top.

Another important aspect of blockchain is its power to disintermediate and democratize. Currently, Quora, Reddit, and even the popular Stack Overflow are aimed at rewarding one group of stakeholders—investors. Despite the vibrant communities they've developed, most of the decisions made relate to the entrenched interests of those individuals at the top. This creates serious conflicts of interest and questions of censorship. Reddit, one of the most popular sources of information on the web, has faced serious pushback because of this, and it has damaged its credibility among users.
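The token-incentive mechanism described above can be sketched as a toy model. This is plain Python, not an actual blockchain or smart contract, and every name and balance below is invented for illustration: voters stake tokens on an answer, the answer's author is rewarded, and the token-weighted score decides which answer surfaces.

```python
# Toy model of token-incentivized Q&A ranking (illustrative only).
from collections import defaultdict

balances = defaultdict(int)           # user -> token balance
scores = {"alice": 0, "bob": 0}       # answer author -> token-weighted score

def upvote(voter: str, author: str, stake: int) -> None:
    # A voter stakes tokens on an answer; the stake rewards the author
    # and simultaneously weights the answer's ranking.
    if balances[voter] < stake:
        raise ValueError("insufficient tokens")
    balances[voter] -= stake
    balances[author] += stake
    scores[author] += stake

balances["carol"] = 10
upvote("carol", "alice", 7)
upvote("carol", "bob", 3)

best_answer = max(scores, key=scores.get)
assert best_answer == "alice"
```

The point of the sketch is the incentive alignment: because votes cost the voter tokens, gaming the ranking is no longer free, and accurate answerers accumulate value directly.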
With blockchain, these problems become obsolete, as centralized authorities have significantly less influence over networks that are distributed and peer-to-peer. Power in this case is returned to the community, which can interact more freely, assured that no external interests are involved in their interactions. More importantly, it limits the influence marketers, advertisers, and other similar actors have on networks, alongside their ability to rig answers and game the system.

A MORE DEMOCRATIC DEBATE

Q&A platforms are not going anywhere, especially as their popularity continues to gain traction. However, for them to truly reach their potential as democratic sources of information, they must clean up their act. By embracing blockchain, the sector is taking a very real first step towards a model that incentivizes better, more factual answers while proactively working to block out those actors who distort information for personal gain.
For nearly twenty years, most software applications have been web based. From web sites to enterprise software systems, applications comprise a web tier and enable access to functionality via a web browser. Web application developers are therefore offered a rich set of tools and techniques for accomplishing their development tasks. In most cases, application developers make use of web application development frameworks, which speed up their development tasks and ease the production of well-structured and less error-prone applications. For example, frameworks accelerate development by enabling developers to configure part of the application rather than code it from scratch. Furthermore, they provide readily available support for popular programming patterns like MVC (Model View Controller).

Numerous web application development frameworks have emerged in the last two decades. They support different programming languages and platforms and have been advancing in line with the evolution of the architecture and the programming patterns of web applications. Given the wide variety of available options, it's important for developers to understand the merit and need for the various frameworks, along with their pros and cons. In the following paragraphs, we present seven of the most popular frameworks, which are widely used by developers as we speak.

Angular is one of the most popular frameworks for web application development, used by giant companies like Google, Microsoft, and PayPal. The main value proposition of this framework lies in its flexibility and the rich set of functionalities that it offers. For example, it provides handy programming features like dependency injection, while supporting TypeScript, which makes it appropriate for large-scale applications. Likewise, it provides template syntax and command-line support for quick prototyping.
Developers take advantage of Angular in order to improve their productivity and speed of application development, both for enterprise-scale solutions and for conventional web applications with dynamic content.

Django is a nearly fifteen-year-old framework for web development, which supports many thousands of web projects. It is still one of the most popular frameworks, thanks to its ability to support novel solutions to web application development problems and to the fact that it is constantly improving. It is based on Python, a programming language with growing popularity and very good momentum. Django is flexible, scalable and applicable to almost all web development problems. Indeed, it is used for the development of small-scale applications and proofs-of-concept, but also in large-scale applications. Over the years, its community has grown significantly, and its documentation has greatly improved. That's why it is still one of the top choices when it comes to developing web applications.

The Ruby language emerged more than fifteen years ago as a prominent way of developing web applications, and its penetration in web sites is currently impressive. Based on the Ruby language, the Ruby on Rails framework remains a popular option for web application development. The framework is known for its ability to solve complex development issues and comes with a rich set of libraries that boost developers' productivity. It also provides excellent test automation features, which lead to good code quality. Ruby on Rails is used by major web sites and marketplaces like Airbnb and Groupon.

In addition to the above-listed frameworks, there are many more notable mentions like Symfony and React. The available frameworks cover all tastes and needs, through support for different programming languages, development platforms, and programming styles. There is certainly not a one-size-fits-all solution. Developers are in most cases free to make their own choice.
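To make the MVC pattern mentioned above concrete, here is a framework-agnostic sketch in Python. The class and method names are illustrative inventions, not taken from Angular, Django or Rails; real frameworks supply far richer versions of each role (ORM-backed models, template-driven views, routed controllers).

```python
# Minimal sketch of the Model-View-Controller pattern (illustrative names).
class Model:
    """Holds application state."""
    def __init__(self):
        self.items = []

    def add(self, item: str) -> None:
        self.items.append(item)

class View:
    """Renders state for presentation (a real view would emit HTML)."""
    @staticmethod
    def render(items) -> str:
        return "\n".join(f"- {item}" for item in items)

class Controller:
    """Mediates between user input and the model/view."""
    def __init__(self, model: Model, view: View):
        self.model, self.view = model, view

    def handle_add(self, item: str) -> str:  # e.g. triggered by an HTTP POST
        self.model.add(item)
        return self.view.render(self.model.items)

app = Controller(Model(), View())
app.handle_add("first task")
output = app.handle_add("second task")
assert output == "- first task\n- second task"
```

The value of the separation is that each role can change independently: the view can switch from plain text to HTML without touching the model, which is exactly the structure MVC frameworks hand developers out of the box.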
Nevertheless, one should keep in mind that all these frameworks have a quite significant learning curve. Hence, prior to investing in one of them, developers must make sure that the selected framework will serve their needs across a series of projects. Overall, we are living in exciting times with a lot of freedom and plenty of choices. Web application frameworks empower developers in this direction.
February 3, 2020 | Written by: Barbara Jones and Mariya Vyushkova

Categorized: Quantum Computing

To build cheaper and more efficient sustainable energy options, we need to know a lot more than we currently do about the chemical reactions that convert solar energy into electricity. One of the best ways to do that is through computer models that simulate complex molecular interactions. Although classical computers have served this purpose well over the past few decades, we explain in a new research study the special qualities of quantum computing that will help researchers advance technologies for solar energy conversion, artificial photosynthesis and photovoltaics to an entirely new level.

Our study, Simulation of Quantum Beats in Radical Pairs on a Noisy Quantum Computer, details how IBM Research and University of Notre Dame scientists—with help from students at Georgetown University, DePaul University, Illinois Institute of Technology and Occidental College in Los Angeles—used a cloud-based IBM Quantum computer to simulate how a chemical reaction outcome is controlled by the time evolution of the entangled state of the two reactants, and how this spin chemistry phenomenon is affected by the gradual loss of magnetization and dephasing caused by thermal fluctuations.

The "Almaden" quantum computer represents one of the latest generations of IBM's 20-qubit systems. This is an illustration of the processor's qubit configuration and connectivity.

Spin chemistry is a subfield of chemistry that deals with magnetic spin effects in chemical reactions. It connects quantum phenomena such as superposition and entanglement to tangible chemistry parameters such as reaction yield (the amount of product a chemical reaction generates). With a quantum computer, spin chemistry allows us to directly simulate some dynamic chemical processes—essentially, the kinetics of chemical reactions.
Spin effects in radical pairs play an important role in processes underlying solar energy conversion. Notre Dame researchers had for years used classical computers to study spin chemistry. Simulations created using those computers, however, required the introduction of artificial noise to try to realistically mimic chemical reactions. In 2018, the researchers jumped at the chance to create more detailed spin chemistry simulations using IBM's publicly available 5-qubit quantum computers. And by April 2019, Notre Dame had joined the IBM Quantum Network, which offered them access to the IBM Quantum computing systems and expertise they sought to carry out their spin chemistry experiments.

Working together, our team of scientists used a quantum computer to simulate how spin effects control the reaction yield. In this case, the two possible reaction products were molecules in two different types of excited states: either singlet (with spin 0) or triplet (with spin 1), each containing a different amount of energy. In the system we studied, based on experimental data published by V.A. Bagryansky's group of the V.V. Voevodsky Institute of Chemical Kinetics and Combustion, the reaction outcome is expressed in fluorescence or phosphorescence, which helps us better understand how a reaction works on a molecular level. In this system, the molecules' signal loss was measured using fluorescence.

FIG. 1: Vector diagram representing singlet-to-triplet oscillations in a radical pair in a strong magnetic field

The molecules' loss of magnetization due to electron spin relaxation was analogous to magnetic tape losing its ability to store information due to excessive heat. Magnetic media—largely replaced by flash, but still used for archival storage—is made of islands of magnetic material. For a long time, magnetic media manufacturers struggled with their equipment running at room temperature or hotter, because heat weakened the magnetic signals over time.
Fast electron spin relaxation can likewise diminish the efficiency of spin transport in solar energy conversion applications.

Our experiment's success was a two-way street, enabling us to study quantum computer behavior as well as spin chemistry. Unlike most experiments on quantum computers, which look to leverage the technology's incredible potential by taking advantage of the short lives of qubits—measurable in microseconds—we sought to slow down the calculations sent to our two-qubit circuits. That enabled us to look in detail at what the gates and qubits were doing over many seconds and even minutes. Normally in quantum computing, someone submits a program, it runs, measurements are made, and the program stops. Instead, we used OpenPulse, a programming language within the Qiskit open-source quantum-computing framework, to specify pulse-level control on the quantum device. We slowed down the calculations so we could see the quantum computer's noise processes.

Noise is a natural property of qubits, but limits the number of calculations they can perform and introduces errors to the final results. As we continue our work in this area, we will be able to contribute to the knowledge of those studying how to mitigate such noise and create more robust and less error-prone quantum computers in the future.

Our research serves as a new use case for quantum computing. We showed that qubit noise, typically an impediment to quantum computer use, can actually be an advantage over a classical computer for chemical simulations. Looking ahead, we hope that OpenPulse will become more of a tool to engineer noise and change quantum signals. The greater control OpenPulse can offer, the better future experiments can simulate—and use—noise to better understand complex chemical phenomena such as artificial photosynthesis and solar energy conversion.

Dr. Vyushkova is currently a visiting scientist at IBM Research-Almaden.
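As a rough, purely illustrative sketch of the spin physics at play (this is not the paper's pulse-level experiment), the singlet-triplet quantum beats and their dephasing can be modeled in a few lines: the singlet probability oscillates at a beat frequency set by the g-factor difference of the two radicals and is damped by spin relaxation. The beat frequency and relaxation time below are made-up values chosen only for the sketch.

```python
# Toy model of quantum beats in a radical pair: cosine oscillation between
# singlet and triplet states, with an exponential dephasing envelope.
import math

def singlet_probability(t_ns: float, delta_omega: float, t2_ns: float) -> float:
    # P_S(t) = (1 + cos(delta_omega * t) * exp(-t / T2)) / 2
    return 0.5 * (1 + math.cos(delta_omega * t_ns) * math.exp(-t_ns / t2_ns))

delta_omega = 2 * math.pi * 0.1   # beat frequency, rad/ns (illustrative)
t2 = 50.0                         # dephasing time, ns (illustrative)

# The pair starts as a pure singlet...
assert singlet_probability(0, delta_omega, t2) == 1.0
# ...and after many T2 the beats wash out toward a 50/50 singlet/triplet mix,
# the loss-of-magnetization effect described in the text:
assert abs(singlet_probability(1000, delta_omega, t2) - 0.5) < 1e-6
```

The damping term is the classical stand-in for the relaxation and dephasing that the study instead captured directly via the quantum computer's own qubit noise.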
Brian Rost, Barbara Jones, Mariya Vyushkova, Aaila Ali, Charlotte Cullip, Alexander Vyushkov, and Jarek Nabrzyski. Noisy Simulation of Quantum Beats in Radical Pairs on a Quantum Computer. arXiv (submitted 3 Jan 2020). https://arxiv.org/abs/2001.00794
Microsoft® Office Excel® 2011: Level 2 (Macintosh)

Course Specifications

Course number: 084556
Course length: 1.0 day(s)

Course Description

Course Objective: You will use advanced formulas and work with various tools to analyze data in spreadsheets. You will also organize table data, present data as charts, and enhance the look and appeal of workbooks by adding graphical objects.

Target Student: This course is meant for those desiring to gain advanced skill sets necessary for calculating data using functions and formulas, sorting and filtering data, using PivotTables and PivotCharts for analyzing data, and customizing workbooks.

Prerequisites: Before starting this course, students are recommended to take the following Element K course or have equivalent knowledge: Microsoft® Office Excel® 2011: Level 1.

Hardware Requirements

For this course, you will need one computer for each student and the instructor. Each computer should have the following hardware configuration:
- Intel processor only.
- 1 GB of RAM or more.
- 2.5 GB of available hard disk space, formatted as HFS+ (also known as Mac OS Extended format).
- DVD drive.
- A keyboard and mouse or other pointing device.
- 1280 x 800 pixel or higher resolution is recommended.
- Network cards and cabling for local network access.
- Internet access (contact your local network administrator).
- A printer (optional) or an installed printer driver.
- A projection system to display the instructor's computer screen.

Software Requirements

Each computer requires the following software:
- Microsoft® Office:mac (Home & Student, Home & Business) Edition 2011.

Course Objectives

Upon successful completion of this course, students will be able to:
- use advanced formulas.
- organize worksheet and table data using various techniques.
- create and modify charts.
- insert and modify graphic objects in a worksheet.
- customize and enhance workbooks and the Microsoft Office Excel environment.

- Lesson 1: Calculating Data with Advanced Formulas
  - Topic 1A: Apply Cell and Range Names
  - Topic 1B: Calculate Data Across Worksheets
  - Topic 1C: Use Specialized Functions
  - Topic 1D: Analyze Data with Logical and Lookup Functions
- Lesson 2: Organizing Worksheet and Table Data
  - Topic 2A: Create and Modify Tables
  - Topic 2B: Format Tables
  - Topic 2C: Sort or Filter Data
  - Topic 2D: Use Functions to Calculate Data
  - Topic 2E: Create a PivotTable Report
- Lesson 3: Presenting Data Using Charts
  - Topic 3A: Create a Chart
  - Topic 3B: Modify Charts
  - Topic 3C: Format Charts
- Lesson 4: Inserting Graphic Objects
  - Topic 4A: Insert and Modify Pictures and Clip Art
  - Topic 4B: Draw and Modify Shapes
  - Topic 4C: Illustrate Workflow Using SmartArt Graphics
  - Topic 4D: Layer and Group Graphic Objects
- Lesson 5: Customizing and Enhancing the Excel Environment
  - Topic 5A: Customize the Excel Environment
  - Topic 5B: Customize Workbooks
  - Topic 5C: Manage Themes
  - Topic 5D: Create and Use Templates
What is end-to-end encryption? End-to-end encryption (E2EE) is a system of communication where only the communicating users can read the messages. This kind of encryption prevents any third party from eavesdropping on your data, keeping it safe and private at all times. This is not the case with many other providers, including Dropbox, Google Drive, Microsoft OneDrive and many others. For a detailed technical definition of E2EE, look it up on Wikipedia.
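To illustrate the idea, here is a toy sketch — emphatically not production cryptography; real E2EE systems use vetted schemes such as the Signal protocol or authenticated AES modes. It assumes a shared key established out of band: only the endpoints holding that key can recover the plaintext, while any relaying server sees only ciphertext.

```python
# Toy end-to-end encryption demo (XOR keystream derived via SHA-256).
# DO NOT use this construction for real data; it is illustration only.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Expand the shared key into a pseudo-random keystream.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR is its own inverse

key = b"shared-secret-established-out-of-band"  # known only to the endpoints
message = b"meet at noon"
ciphertext = encrypt(key, message)

assert ciphertext != message                 # the relay cannot read it
assert decrypt(key, ciphertext) == message   # the recipient can
```

The property being demonstrated is exactly the E2EE definition above: the party in the middle carries the bytes but, lacking the key, learns nothing about their content.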
The much-publicised Democratic National Committee (DNC) email hack that resulted in nearly 20,000 confidential email messages leaked to WikiLeaks took place amid a highly politicised post-election environment. The US government has been quick to point fingers and label the attack a state-sponsored cyber-attack orchestrated by the Russian government; however, culpability and politics are not the focus of this blog article.

As expected, this news story has reignited the conversation about email and online security, raising the following question: are my emails safe from prying eyes? Although most business and personal email correspondence does not contain the same level of sensitive information routinely handled by governments or political parties, it is imperative for users to know if their inboxes and/or messages are at risk and, if so, how to mitigate those risks.

Are Your Emails at Risk?

If you or your organization rely on email to transmit sensitive or confidential information, you should not be relying solely on password protection for email security. Instead, you or your organization's IT department should ensure your email provider or servers incorporate some level of security or encryption beyond run-of-the-mill, easy-to-crack password protection.

Between 2012 and 2014, a Romanian email hacker nicknamed "Guccifer" hacked into about 100 email accounts, including those of former president George W. Bush and former Secretary of State Colin Powell. During an interview with The New York Times, Guccifer confessed he was able to penetrate these email accounts by leveraging rudimentary tactics, namely surfing the web for information about his targets and "guessing the answers to their email security questions". His tools: basic computer knowledge, an old NEC computer, and a Samsung cellphone.

The case described above shows how easy it is for "hackers" to penetrate email accounts and steal confidential information.
Furthermore, a report by Google shows that approximately 40% to 50% of emails between Gmail and other email providers are not encrypted, meaning they can be snooped on by prying eyes while in transit to their destination. The truth is, when sending a message through an email provider such as Gmail or your corporate email account, the message can easily be intercepted by a malicious actor, not to mention your account can also get hacked. To mitigate the risk of emails and other sensitive information falling into the wrong hands, you should be taking a few simple precautions.

How to "Bulletproof" Your Emails

Email encryption is supported by popular email clients such as Microsoft Outlook, Exchange, and Gmail. Google's Gmail uses Transport Layer Security (TLS) to create an encryption "tunnel" between its email servers and everyone else's. When emails are in this tunnel they can't be hacked. However, a tunnel has two ends, so for a message to be encrypted, both sides of an email exchange need to support encryption. Below are some measures that can be taken to maximize email security.

Add an Extra Layer of Encryption

To ensure your messages are secure even if the recipient is not using an encrypted email service, consider using an inexpensive service such as StartMail. This service is similar to the email encryption systems used by banks, in which the recipient needs to answer a secret question in order to access the message.

Is Your Password Good Enough?

As for avoiding a "Guccifer"-type hack, ensure you don't re-use the same password for every account and refrain from using a password from the "most common passwords" list. It also goes without saying, but your security question must be very difficult to guess.

Avoid Public Wi-Fi

Public Wi-Fi networks are a hacker's dream! On such networks, attackers can position themselves between your device and the hotspot connection point.
This means your device is not talking directly to the hotspot; instead, hackers can easily penetrate your device and gain access to your information. To be safe, you might consider using a VPN, visiting only HTTPS (SSL/TLS-secured) websites and turning off Wi-Fi when not in use. As a rule of thumb, it is also recommended to refrain from online banking and other sensitive transactions when connected to public Wi-Fi.

By following these simple tips, chances are your emails will be secure, helping you or your organization avoid any embarrassments à la DNC.
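The advice about common passwords can be sketched as a simple screening check. The tiny password list and the length threshold below are illustrative stand-ins; real screening tools check against corpora of millions of breached passwords.

```python
# Toy password screening: reject entries on a "most common passwords" list
# or entries that are too short. List and threshold are illustrative only.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "111111"}
MIN_LENGTH = 12

def is_password_acceptable(password: str) -> bool:
    if len(password) < MIN_LENGTH:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

assert not is_password_acceptable("password")               # on the list
assert not is_password_acceptable("Short1!")                # too short
assert is_password_acceptable("correct horse battery staple")
```

A check like this is the programmatic form of the "Guccifer" lesson above: the easiest accounts to break are the ones protected by guessable secrets.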
If you need to find your computer's host name or the username, for example, during troubleshooting or when instructed to do so by a Teramind Support Agent, here are some commands you can use.

Windows

From your Windows Start Menu or from the Windows Search Bar, type cmd.exe and press Enter. When the Command Prompt opens, type the following command(s) and press Enter:

- whoami – this will return the computer (or domain) name and the username.
- whoami /upn – this will return the User Principal Name (UPN). A UPN in Windows Active Directory is the name of a system user that consists of the user name (logon name) and the domain name (UPN suffix). For example: firstname.lastname@example.org. Note that while the UPN looks like an email address, it is not one.
- A further command will show the display name in case the account is named differently than the user profile account.

macOS

Click the Launchpad icon in the Dock, type Terminal in the search field, then click Terminal. Or, in the Finder, open the /Applications/Utilities folder, then double-click Terminal. Type the following command(s) from the Terminal prompt and press Enter:

- whoami – similar to Windows' equivalent command (see above).
- hostname – this will return the computer's host name.
- scutil --get HostName – returns results similar to the hostname command above.
- scutil --get LocalHostName – this will return the computer's local host name, such as john-macbook-pro.local. A local host name designates a computer on a local subnet. This is mostly used for Bonjour-aware services on the local network.
- scutil --get ComputerName – will return the computer's name. For example, John's MacBook Pro.
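As a hypothetical, cross-platform complement to the shell commands above (this is not part of the Teramind instructions), Python's standard library can report the same host name and username programmatically on Windows, macOS, or Linux:

```python
# Cross-platform host name and username lookup using only the stdlib.
import getpass
import socket

hostname = socket.gethostname()
try:
    username = getpass.getuser()
except OSError:
    # Some minimal environments expose no login name at all.
    username = "unknown"

# Mirrors the COMPUTERNAME\username style shown by the Windows command.
print(f"{hostname}\\{username}")
```

This can be handy when the information needs to be collected by a script rather than read off a terminal.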
All developers building software for the web have heard about Application Programming Interfaces or, as they are also called, APIs. In this blog, we tell you a little more about them.

What are APIs?

The essence of an API is to make a call to the server from the client, receive data, and display it. In other words, REST APIs work by fetching requests for a specific data set from a database, then returning all information relevant to the data set that was requested. However, there's one more type of API that is very frequently discussed: the SOAP API. SOAP stands for Simple Object Access Protocol and, as the name suggests, it's a little more complex than REST. Organizations usually use SOAP because they either require more comprehensive security measures or ACID (Atomicity, Consistency, Isolation, Durability) compliance, or because someone's building an application that needs to interact with a legacy system. SOAP APIs are easily distinguishable from their REST counterparts by simply looking at the code of the application: if an API is of the SOAP type, the request to the API should be provided in an XML request body.

Where are APIs Used?

Organizations frequently make use of APIs in a couple of scenarios:
- Login interfaces – APIs are frequently used as companions to popular login interfaces, allowing people to log in to Facebook, Instagram, Twitter, LinkedIn, or other prominent social media websites with just the click of a button. Convenient, right?
- Banking – remember the last item you bought through eBay or Amazon? The moment you clicked a button, a call to an API was made, and that call allowed you to complete a transaction. You didn't even think about that, did you? That's how easy APIs make your work!
- The travelling industry – most airfare and hotel searching websites make use of APIs in order to show the best deals.
- The automobile industry – think of car manufacturers like Nissan or Tesla, for example.
There's a very good chance that these cars get their updates delivered via an API call.
- The entertainment industry – think of Netflix, Amazon Prime, or Hulu. Chances are that all of those platforms also use APIs to help you find content that interests you, based on movies you've watched in the past or on searches you run on those websites.
- The information security industry – the website you're reading this blog on right now – yes, BreachDirectory – also makes use of an API to show data from a database: BreachDirectory uses it to show people information pertaining to their risk of identity theft on the web.

APIs are implemented in a very wide variety of industries, from information security to online shopping, and they help stop identity theft attacks both day and night. By now, you should have a pretty good grasp of Application Programming Interfaces. The question is – will the capabilities they provide help your applications thrive and succeed? The most likely answer, unsurprisingly, is yes. Think about it – APIs are everywhere. Judging by the use cases mentioned above, we can already see that APIs serve a wide variety of industries and use cases, and it's only a matter of time before an API is implemented at the company you happen to work for.

The BreachDirectory API is also helping numerous companies put their security first – it lets companies search through hundreds of data breaches and secure their teams in seconds. With an unlimited number of requests, you will always be the first to know whether hackers are threatening the wellbeing of your company's infrastructure. Don't let your company be the next one on the list of leaked data breaches – implement a data breach API to be safe.

Finally, the capabilities provided by the API might not be for everyone – if that's the case, make sure to run a search through the data breach search engine to stay on the safe side. Until next time!
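The REST-versus-SOAP distinction described above can be sketched in a few lines of Python. The endpoint, operation name, and field names below are hypothetical, invented purely for illustration – no real service is being called:

```python
import json
import xml.etree.ElementTree as ET

def build_rest_request(user_id):
    """A REST call is typically just a URL plus an optional JSON body."""
    url = f"https://api.example.com/users/{user_id}"  # hypothetical endpoint
    body = json.dumps({"fields": ["name", "email"]})
    return url, body

def build_soap_request(user_id):
    """A SOAP call wraps the same operation in an XML envelope."""
    ns = "http://schemas.xmlsoap.org/soap/envelope/"
    envelope = ET.Element(f"{{{ns}}}Envelope")
    soap_body = ET.SubElement(envelope, f"{{{ns}}}Body")
    get_user = ET.SubElement(soap_body, "GetUser")  # hypothetical operation
    ET.SubElement(get_user, "Id").text = str(user_id)
    return ET.tostring(envelope, encoding="unicode")

rest_url, rest_body = build_rest_request(42)
soap_xml = build_soap_request(42)
```

Note how the SOAP variant carries its request in an XML body, exactly the telltale sign mentioned above; in practice the SOAP message would also be validated against a WSDL contract.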
Since founding VDOO, we have been working to analyze a great many IoT devices, in the broadest way possible. The more we look into these devices and find their vulnerabilities, the further we validate a basic hypothesis: security for the IoT must start with the most basic security building blocks. It is very challenging to add security to a device that is already in operation, just as it is to mitigate security concerns after the fact. Hence, security should be in the DNA of the device. There are just a few key steps which need to be taken in order to deal with most IoT attack vectors – from the simple ones that utilize default passwords, to the most complicated ones that exploit a newly discovered vulnerability. But these few security essentials must be implemented correctly and accurately. These 'security building blocks' differ from one device to the next, since they are heavily dependent on the device's attributes. But how can these principles deal with the unknown? With zero-day vulnerabilities? Is it even possible? We have learned from the traditional IT world that a new and unique security agent had to be developed specifically to deal with such vulnerabilities. So how can a mere few security building blocks be all that is needed for IoT? First, it has a lot to do with the goal of the attack, as well as the nature of IoT versus IT. For the most part, PCs have the ability to receive emails and browse the web, not to mention the endless ways in which the user can interact with the system. With many IoT devices, this is not and will not be the case. User interaction is designed to be very limited, so the malware delivery method would have to depend on things other than user interaction. In most cases, these would be inherent vulnerabilities, but as the reality stands today, it is primarily missing security essentials that are the cause. And no, it is not entirely possible to deal with all known and unknown vulnerabilities with just a small set of security building blocks.
However, it makes it significantly harder to exploit vulnerabilities if basic security is implemented.

Security fundamentals work

For many different reasons, most devices lack security. In many cases, these devices do not even have the most basic fundamentals of security, such as enforcing a default password change, protecting the boot process, checking firmware integrity, or encrypting communication with the app controller or web services. For these devices, attackers do not necessarily even need to look for a vulnerability, as the flow of attack can be very straightforward. But if basic security mechanisms are implemented, the attacker must look for vulnerabilities. These can be vulnerabilities in the security building blocks themselves or vulnerabilities in other software components of the product. If the basic security mechanisms are implemented properly, it will be very hard for the attacker to find vulnerabilities to bypass them; and it will be much harder for the attacker to exploit vulnerabilities which are not part of the security building blocks, as these basic mechanisms will prevent them from doing so successfully. The last point is worth dwelling on: even if an attacker does find a severe vulnerability, it will be very difficult to exploit if basic security is implemented.

A real world example

To better clarify this, I will use a recent example from our labs. As part of our research process, we discovered several new vulnerabilities in a commercial IP camera that also had previously known vulnerabilities. Almost all of the vulnerabilities enabled an attacker to execute code on the device, allowing them to download files to the device, run processes, install packages and use the device as a bot or for another purpose. In other words, if the attacker finds a way to log in to the device, then this vulnerability will allow him to do whatever he wants. What needs to happen for malware to be able to exploit vulnerabilities like these?
First, it needs to log in or bypass the authentication mechanism to gain access. In simpler terms – it needs the admin credentials to get in, or it needs to break the login mechanism. Only then can it exploit the vulnerability to run code on the device. Devices that do not force a change from the default password are the easiest targets and are responsible for some of the largest attacks to date. And yet it depends on such a simple mechanism: requiring the default credentials to be changed. Through implementing just this one change, the vulnerability in question becomes much less valuable, and much harder to exploit for a mass remote attack. Looking at other vulnerabilities in many types of devices (routers, cameras, doors, fire alarms, smart TVs) we see exactly the same phenomenon: 'severe vulnerabilities' which are considered to be very sophisticated, but whose successful exploitation by a remote source is highly dependent on a lack of basic security. If only the makers of these devices had taken care to properly implement the security building blocks in the first place, the chances that these severe vulnerabilities would be exploited would have been dramatically lower. Basic security, if implemented correctly, can constitute a very efficient way to make it hard for attackers to gain access, even when they do manage to locate a new vulnerability on the victim's IoT device.
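Two of the building blocks named above – forcing a default-password change before normal login, and checking firmware integrity against a known-good hash – are cheap to express in code. The sketch below is a deliberately simplified illustration, not VDOO's implementation:

```python
import hashlib

# Hash of the factory-default password (hypothetical default "admin").
DEFAULT_PASSWORD_HASH = hashlib.sha256(b"admin").hexdigest()

class Device:
    """Toy IoT device enforcing two basic security building blocks."""

    def __init__(self, firmware: bytes, expected_fw_hash: str):
        self.password_hash = DEFAULT_PASSWORD_HASH
        self.firmware = firmware
        self.expected_fw_hash = expected_fw_hash

    def login(self, password: str) -> str:
        if hashlib.sha256(password.encode()).hexdigest() != self.password_hash:
            return "denied"
        if self.password_hash == DEFAULT_PASSWORD_HASH:
            # Correct default credentials: allow nothing but a password change.
            return "must-change-password"
        return "ok"

    def change_password(self, old: str, new: str) -> bool:
        if hashlib.sha256(old.encode()).hexdigest() != self.password_hash:
            return False
        self.password_hash = hashlib.sha256(new.encode()).hexdigest()
        return True

    def verify_firmware(self) -> bool:
        # Integrity check: reject tampered firmware images.
        return hashlib.sha256(self.firmware).hexdigest() == self.expected_fw_hash

fw = b"firmware-image-v1"
cam = Device(fw, hashlib.sha256(fw).hexdigest())
first_login = cam.login("admin")         # default credentials are not enough
cam.change_password("admin", "s3cret!")
second_login = cam.login("s3cret!")      # now normal operation is allowed
```

With just this much in place, the mass-scale attack flow described above – log in with factory defaults, then exploit – already fails at the first step.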
It sounds like a scene from a spy thriller. An attacker gets through the IT defenses of a nuclear power plant and feeds it fake, realistic data, tricking its computer systems and personnel into thinking operations are normal. The attacker then disrupts the function of key plant machinery, causing it to misperform or break down. By the time system operators realize they’ve been duped, it’s too late, with catastrophic results. The scenario isn’t fictional; it happened in 2010, when the Stuxnet virus was used to damage nuclear centrifuges in Iran. And as ransomware and other cyberattacks around the world increase, system operators worry more about these sophisticated “false data injection” strikes. In the wrong hands, the computer models and data analytics – based on artificial intelligence – that ensure smooth operation of today’s electric grids, manufacturing facilities, and power plants could be turned against themselves. Purdue University’s Hany Abdel-Khalik has come up with a powerful response: To make the computer models that run these cyberphysical systems both self-aware and self-healing. Using the background noise within these systems’ data streams, Abdel-Khalik and his students embed invisible, ever-changing, one-time-use signals that turn passive components into active watchers. Even if an attacker is armed with a perfect duplicate of a system’s model, any attempt to introduce falsified data will be immediately detected and rejected by the system itself, requiring no human response. “We call it covert cognizance,” said Abdel-Khalik, an associate professor of nuclear engineering and researcher with Purdue’s Center for Education and Research in Information Assurance and Security (CERIAS). “Imagine having a bunch of bees hovering around you. Once you move a little bit, the whole network of bees responds, so it has that butterfly effect. 
Here, if someone sticks their finger in the data, the whole system will know that there was an intrusion, and it will be able to correct the modified data.”

Trust through self-awareness

Abdel-Khalik will be the first to say that he is a nuclear engineer, not a computer scientist. But today, critical infrastructure systems in energy, water, and manufacturing all use advanced computational techniques, including machine learning, predictive analytics, and artificial intelligence. Employees use these models to monitor readings from their machinery and verify that they are within normal ranges. From studying the efficiency of reactor systems and how they respond to equipment failures and other disruptions, Abdel-Khalik grew familiar with the “digital twins” employed by these facilities: duplicate simulations of data-monitoring models that help system operators determine when true errors arise. But gradually he became interested in intentional rather than accidental failures, particularly what could happen when a malicious attacker has a digital twin of their own to work with. It’s not a far-fetched situation, as the simulators used to control nuclear reactors and other critical infrastructure can be easily acquired. There’s also the perennial risk that someone inside a system, with access to the control model and its digital twin, could attempt a sneak attack. “Traditionally, your defense is as good as your knowledge of the model. If they know your model pretty well, then your defense can be breached,” said Yeni Li, a recent graduate from the group, whose Ph.D. research focused on the detection of such attacks using model-based methods. Abdel-Khalik said, “Any type of system right now that is based on the control looking at information and making a decision is vulnerable to these types of attacks.
If you have access to the data, and then you change the information, then whoever’s making the decision is going to be basing their decision on fake data.” To thwart this strategy, Abdel-Khalik and Arvind Sundaram, a third-year graduate student in nuclear engineering, found a way to hide signals in the unobservable “noise space” of the system. Control models juggle thousands of different data variables, but only a fraction of them are actually used in the core calculations that affect the model’s outputs and predictions. By slightly altering these nonessential variables, their algorithm produces a signal so that individual components of a system can verify the authenticity of the data coming in and react accordingly. “When you have components that are loosely coupled with each other, the system really isn’t aware of the other components or even of itself,” Sundaram said. “It just responds to its inputs. When you’re making it self-aware, you build an anomaly detection model within itself. If something is wrong, it needs to not just detect that, but also operate in a way that doesn’t respect the malicious input that’s come in.” For added security, these signals are generated by the random noise of the system hardware, for example, fluctuations in temperature or power consumption. An attacker holding a digital twin of a facility’s model could not anticipate or re-create these perpetually shifting data signatures, and even someone with internal access would not be able to crack the code. “Anytime you develop a security solution, you can trust it, but you still have to give somebody the keys,” Abdel-Khalik said. “If that person turns on you, then all bets are off. Here, we’re saying that the added perturbations are based on the noise of the system itself. So there’s no way I would know what the noise of the system is, even as an insider. 
It’s being recorded automatically and added to the signal.” Though the papers published by the team members so far have focused on using their paradigm in nuclear reactors, the researchers see potential for applications across industries – any system that uses a control loop and sensors, Sundaram said. The same methods could also be used for objectives beyond cybersecurity, such as self-healing anomaly detection that could prevent costly shutdowns, and a new form of cryptography that would enable the secure sharing of data from critical systems with outside researchers.

Cyber gets physical

As nuclear engineers, Abdel-Khalik and Sundaram benefit from the expertise and resources of CERIAS to find entry points into the worlds of cybersecurity and computer science. Abdel-Khalik credits Elisa Bertino, the Samuel D. Conte Professor of Computer Science and CERIAS research director, with the original spark that led to creating the covert cognizance algorithm, and thanks the center for exposing him to new partnerships and opportunities. Founded in 1998, CERIAS is one of the oldest and largest research centers in the world concentrating on cybersecurity. Its mission, says managing director Joel Rasmus, has always been interdisciplinary, and today the center works with researchers from 18 departments and eight colleges at Purdue. Abdel-Khalik’s research is a perfect example of this diverse network. “When most people think about cybersecurity, they only think about computer science,” Rasmus said. “Here’s a nuclear engineering faculty member who’s doing unbelievably great cyber and cyberphysical security work. We’ve been able to link him with computer scientists at Purdue who understand this problem, yet don’t understand anything about nuclear engineering or the power grid, so they’re able to collaborate with him.” Abdel-Khalik and Sundaram have begun to explore the commercial possibilities of covert cognizance through a startup company.
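The noise-space watermarking idea described earlier can be caricatured in a few lines of Python. This is a deliberately simplified sketch of the concept, not the published covert cognizance algorithm: a keyed tag derived from the essential readings (and from a seed standing in for hardware noise) is hidden in the low-order digits of the nonessential readings, so falsified essential data no longer matches the embedded signature:

```python
import hashlib
import hmac

def watermark(essential, seed):
    """Keyed tag over the essential readings; the seed stands in for
    unpredictable hardware noise (temperature/power fluctuations)."""
    msg = ",".join(f"{v:.6f}" for v in essential).encode()
    return hmac.new(seed, msg, hashlib.sha256).digest()

def embed(essential, nonessential, seed):
    """Hide one tag byte in the 4th-6th decimal places of each
    nonessential variable (a toy stand-in for the 'noise space')."""
    tag = watermark(essential, seed)
    carried = [round(v, 3) + tag[i] * 1e-6 for i, v in enumerate(nonessential)]
    return essential + carried

def verify(frame, n_essential, seed):
    """Re-derive the tag and check the hidden bytes still match."""
    essential, carried = frame[:n_essential], frame[n_essential:]
    tag = watermark(essential, seed)
    return all(abs((v - round(v, 3)) - tag[i] * 1e-6) < 1e-9
               for i, v in enumerate(carried))

seed = b"\x07" * 16                      # stand-in for sampled hardware noise
frame = embed([300.0, 5.5], [0.481, 0.12, 7.309, 2.5], seed)
clean_ok = verify(frame, 2, seed)        # untouched frame passes
tampered = [290.0] + frame[1:]           # attacker falsifies an essential value
tampered_ok = verify(tampered, 2, seed)  # signature no longer matches
```

Because the seed changes with the hardware noise, even an attacker holding a perfect digital twin cannot forge the hidden bytes, which is the essence of the scheme.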
That startup, Covert Defenses LLC, has recently engaged with Entanglement Inc., an early-stage deep tech company, to develop a go-to-market strategy. In parallel, the team will be working to develop a software toolkit that can be integrated with the cyberphysical test beds at CERIAS and the Pacific Northwest National Laboratory, where sensors and actuators coupled to software provide a simulation of large-scale industrial systems. “We can provide additional applications for the technologies that he’s developing, since this is an idea that can help nearly every cyberphysical domain, such as advanced manufacturing or transportation,” Rasmus said. “We want to make sure that the research that we’re doing actually helps move the world forward, that it helps solve actual real-world problems.”

The use of SCADA and industrial control systems in nuclear power plants brings cyber security problems and computer incidents to the attention of researchers. Not only nuclear power plants but also all relevant information in this category is highly critical. Attacks against platforms that hold rich information on nuclear power plants can be witnessed.18 The seven cyber incidents outlined below offer insight into the scale and severity of cyber malfunctions and attacks.

Supervisory Control and Data Acquisition (SCADA) and Human Interaction

In the 21st century, national security is tied to the economy, which is highly dependent on energy and critical infrastructures. High electricity production as well as consumption forces states to focus on energy security. Most states use different sources of energy to fulfil their electricity needs. The electric grid and its components are almost always controlled by information technology. National security in the modern age relies on hardware, software, and human-machine interaction more than ever before. For this reason, it is possible to paralyze a nation with sophisticated cyber attacks.
With the realization of what devastating cyber attacks can lead to, states have begun to develop national strategies defining their cyber positions and capabilities in the event of an attack. Through defining major threats, these national cyber strategies determine how agencies and institutions should prepare themselves. States must harmonize their efforts to address structural and technological challenges resulting from changes in mentality, data, and the Internet. Before 1957, computer technology had limited capabilities, executing tasks one at a time in a process known as batch processing. Researchers had no direct access to computers. In addition to insufficient processing capabilities, computers were physically big, requiring huge rooms equipped with coolers. Before the advent of more advanced, modern technology, using computers was a long and time-consuming process. The direct connection to servers that researchers achieved in 1957 was seen as a major milestone in computing technology, even though remote connection to servers had its limitations. High demand led to the time-sharing concept, which permitted different researchers to directly connect to servers over a limited period of time. This concept first emerged so that multiple users could share the processing power of a single computer. This process also created user accounts and a management strategy for accessing the server. Computer technology in the 1960s was far from user-friendly, usable, and accessible. The necessity to connect scholars pushed researchers to create a network that permitted users to share files.40 The space race between the U.S. and U.S.S.R. facilitated the improvement of computing technology. In the 1960s, universities were reluctant to share their computer resources with other users on ARPANET, pushing them to use a small computer called the Interface Message Processor (IMP) before the mainframe to control the network processes.
The mainframe was only responsible for the initialization of programs and data files. The interaction of networks thus led to the Network Control Protocol (NCP), in which the Transmission Control Protocol verified the various computers on the network. The rising number of participants introduced new technological improvements to the net. The introduction of e-mail, Internet Relay Chat (IRC) systems, and Bulletin Board Systems (BBS) boosted the number of network users.41 These platforms also paved the way for computer-mediated communication and initiated the sharing of information among different groups. Hacker groups and technology fans mostly used these earliest forms of computer-mediated communication platforms. After the 1990s, the growing number of Internet users drastically changed human-machine interaction. This development quickly evolved into intensive computer-mediated communication. Hackers and cracker groups42 in different parts of the world shared their technological expertise. These groups also played an important role in cultivating hacker culture and capabilities. Unauthorized access to computers increased swiftly in places where the network was available. For example, the 414s, a group of teenagers from Milwaukee, launched attacks against Los Alamos National Laboratory, Sloan-Kettering Cancer Center, and Security Pacific Bank. Attacks instigated by the hacker group Legion of Doom forced the government to take steps toward the Computer Security Act. As computer technology continued to develop, automation became more common, requiring less human intervention in its routine processes. The major process control computing technology is the supervisory control and data acquisition (SCADA) system. In the early years of computing technology, SCADA systems were monolithic structures, which generally held all operations on a mainframe but limited the capabilities of monitoring systems.
After the improvement of the time-management capabilities of the mainframe's central processing unit (CPU), industry started using distributed SCADA systems. Distributed SCADA systems often share control functions and real-time information with other computers in the local area network. These types of SCADA systems also perform limited control tasks better than monolithic SCADA systems. In most nuclear power plants, the following three components comprise SCADA systems:
- Sensors that measure the condition in specific locations;
- Operation equipment such as pumps and valves;
- Local processors which communicate between sensors and operation equipment43.
There are four different types of local processors, including the Programmable Logic Controller (PLC), Remote Terminal Unit (RTU), Intelligent Electronic Device (IED), and Process Automation Controller (PAC). The main goals of these processors are: collecting sensor data; turning operating equipment on and off based on internal programmed logic or on remote commands; translating protocols for communication between sensors and operation equipment; identifying alarm conditions; and handling short-range communication between local processors, operation equipment, and sensors. This type of communication mostly flows through short cables or wireless connections. Host computers act as the central point of monitoring and control. The human operators monitor activity from host computers and take supervisory action when necessary. It is possible to change the rights and privileges of host computers by accessing the Master Terminal Unit (MTU). Long-range communication travels between the local and host computers, using different methods like leased lines, satellite, microwave, cellular packet data, and frame relay. These types of SCADA systems can communicate through Wide Area Networks using Ethernet or fiber optic connections.
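The local-processor responsibilities listed above (collecting sensor data, switching operating equipment on internal programmed logic, identifying alarm conditions) boil down to the classic PLC scan cycle. The tags and thresholds in this Python sketch are invented purely for illustration:

```python
# Toy PLC scan cycle: read sensors, apply programmed logic, drive equipment,
# and raise alarm conditions. Thresholds are hypothetical.

HIGH_TEMP = 80.0       # alarm threshold (invented for illustration)
PUMP_ON_LEVEL = 30.0   # coolant set point (invented for illustration)

def scan_cycle(sensors, outputs):
    """One pass of the read -> evaluate -> write loop a PLC runs forever."""
    alarms = []
    # Internal programmed logic: keep coolant level above the set point.
    outputs["pump"] = sensors["coolant_level"] < PUMP_ON_LEVEL
    # Identify alarm conditions for the host computer / human operator.
    if sensors["core_temp"] > HIGH_TEMP:
        alarms.append("HIGH core_temp")
        outputs["relief_valve"] = True
    return alarms

outputs = {"pump": False, "relief_valve": False}
alarms = scan_cycle({"coolant_level": 22.0, "core_temp": 85.5}, outputs)
```

A real PLC runs this loop continuously on deterministic timing and forwards the alarm list over the long-range link to the host computer described next.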
SCADA systems use several programmable logic controllers (PLCs) to monitor the different processes and to make necessary adjustments for the regular flow of operation. These PLCs also alert the operator when human intervention is required. The rising connectivity of SCADA systems permits human operators to monitor the process with real-time data on a screen. Yet connectivity makes the system more vulnerable to network attacks. These networked SCADA systems carry human-machine interaction to another level, and they underline the importance of human operators, whose monitoring of alarms is vital to the survival of critical infrastructure. Human operators form the vital nodes for the function of critical facilities like nuclear power plants. In nuclear power plants, human operators are the first level of protection in preventing an accident or noticing a problem. In the control room, the operator has to check the designated indicators of his or her station and make the necessary adjustments to sustain the continuity of the process. The process of human-machine interaction faces two major problems: those centered on the human and those centered on the host computer's interface. The software that controls and communicates with SCADA systems is designed to provide required information and initiate alarms to alert human operators when a problem arises. Early SCADA systems had interface designs that were primitive and not focused on the cognitive and psychological awareness of the operators. The biggest problem with interfaces comes from static design, which is characterized by a lack of movement and animation. Poor graphics accompanied the interface and only changed when triggered by alarms. The alarms themselves did not vary according to the threat level. In some cases, the size of the alarm messages prevents the operator from seeing other information on the screen.
Peripheral equipment, such as monitors and keyboards, was also not designed to permit the operator to easily comprehend the information and respond quickly with as little effort as possible. In the old interface designs, information was dispersed across three to four monitors. Insufficient screen space was one of the problems reported by the operators. In a modern nuclear power plant, the interface has to be designed with a higher resolution, permitting operators to follow the entire process on one large monitor no smaller than 40 inches. During the acquisition process, hardware experts specializing in screens must select the monitor.44 The large screen promotes teamwork in noticing errors and increases the situational awareness of the operators. The host computer's interface is critical to catching anomalies that might be the result of a cyber attack.45

Problems Induced by the Human Factor

Following such a static monitoring process requires a high level of alertness and attentiveness, and it is not easy for an operator to sustain this mode throughout his or her shift. This is not a personal problem but an issue of human cognitive and physical capabilities. As different SCADA systems use different interfaces, human operators need time to adapt to the new interfaces. In the early months of training, the interfaces confuse operators with the multitude of alarms, messages, and information. After the adaptation period ends, the development of tunnel vision appears as a risk as human operators acclimate to static interface designs and tedious repetitions.46 In the beginning, being a human operator seems like a dynamic post, but as time goes by, the alarms become routine and daily tasks extend response time.
According to one report on this topic, “the maximum manageable alarms per hour per operator are around 12, and around 300 alarms per day and most of the required operator actions during an upset (unstable plant and required intervention of the human) are time critical. Information overflow and alarm flooding often confuse the operator, and important alarms may be missed because they are obscured by hundreds of other alarms.”47 Operators complain of many distractions in the control room, including human interruption and phone calls. Peace and quiet in the control room is critical to allowing operators to give their full attention to the screens they are monitoring. Consequently, unauthorized personnel in the control rooms would further jeopardize the security of the facility. Since the human-machine interface is the only window for monitoring nuclear energy plants, the human operator and his or her host computer are critical in preventing an accident or security breach. However, most human-machine interfaces bring their own set of security concerns due to problems in design. Most human-machine interfaces (HMIs) are designed to provide relevant information to human operators in a 2D graphic design. The main focus of HMI design is functionality, usability, and visibility. Neat and interactive designs are crucial to supporting the operator's attention. Thus, the human-machine interface is transforming into a front for cyber defence. The HMI also functions as the defender of a system against abnormal activities. The basic principle of a sustainable security system is to implement a precise and clear security policy, whose major points have to be defined by state regulations and whose institutional details must be written by organizations. Formulating a security policy would help managers to build measurable and self-perpetuating systems where the division of labor is clear-cut.
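The alarm figures quoted above (roughly 12 manageable alarms per operator per hour) translate directly into a simple alarm-flood detector. This sliding-window sketch is illustrative only, not drawn from any cited system:

```python
from collections import deque

class AlarmFloodMonitor:
    """Flags when the alarm rate exceeds what one operator can manage
    (the report quoted above suggests roughly 12 alarms per hour)."""

    def __init__(self, max_per_window=12, window_seconds=3600):
        self.max = max_per_window
        self.window = window_seconds
        self.times = deque()

    def record(self, timestamp):
        """Record one alarm; return True if the operator is now flooded."""
        self.times.append(timestamp)
        # Drop alarms that have aged out of the sliding window.
        while self.times and timestamp - self.times[0] >= self.window:
            self.times.popleft()
        return len(self.times) > self.max

mon = AlarmFloodMonitor()
# 13 alarms arriving one minute apart: the 13th exceeds the hourly budget.
flood = [mon.record(t) for t in range(0, 13 * 60, 60)]
```

A flood flag like this could trigger alarm suppression or summarization so that the few time-critical alarms are not obscured by hundreds of others.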
Computers and electronic devices connected to local networks maintain the physical security of power plants. Their network connectivity, however, makes them especially prone to cyber attacks. Therefore, strong communication and cooperation among the managers of the physical and cyber security fields is a must. Both managers have to know the other's field to grasp the details and prepare for possible threats. Security has to be understood as a continuously evolving cycle that must be assessed regularly according to the changing nature of threats. In nuclear power plants, the conventional security approach draws fixed limits for the physical and cyber security sectors. In the age of hybrid entities, the international community must implement smart security policies that provide flexibility, adaptability, and cooperation. For the new facility in Turkey, the physical and cyber security managers of the nuclear power plant (or critical infrastructures) have to follow these major points:
- Understand legal and regulatory requirements in Turkey and internationally;
- Integrate security into the organizational culture and ensure it is internalized by all stakeholders;
- Develop effective risk assessment programs;
- Develop holistic governance programs for managing information risk;
- Assess the impact of human factors and security strategies and potential breaches of security;
- Develop emergency management policies;
- Develop and ensure quality control in information assurance and security management;
- Improve alternative communication technologies for emergency cases;
- Follow new technologies to upgrade the security level of the facility.
On the first day of operation, the nuclear facility is equipped with the latest technology to work smoothly and securely. However, the emergence of new technology presents the question of how frequently a power plant should update its technology. There are various academic models that focus on a facility's market competitiveness.
Facility managers and government officials must periodically discuss emerging technology and assess the current condition of plants from a security perspective. The maintenance and update of the security system is as critical as writing the security policy of the plant.48 The technological protections tailored to specific nuclear power plants create over-reliance on these tools at the expense of human capacity. However, the capabilities of a plant's personnel are critical to the planning, update, and maintenance of the facility. Safe security systems could be breached due to poor training, inattentiveness, and lack of necessary maintenance by staff. Continuous training and coordination of the disparate security systems in the nuclear power plant are vital to sustaining nuclear safety. Responding to attacks on nuclear facilities can require the coordination of perimeter security officials, cyber security managers, and SCADA engineers. In such an environment, the division of labor must be clearly defined and implemented by managers to prevent chaos in the case of an emergency. Another critical security aspect is dissemination. It is a known truth that facility employees rarely read security policies and amendments to security regulations. Motivating employees to engage with this technical information and these policy documents, and to exercise due caution when disseminating information, presents a challenge. An administrator has to find ways to motivate the employees to abide by the security culture once it is established. In the Turkish case, the language barrier presents another issue. Operator companies (Russians in Akkuyu and the French and Japanese in Sinop) have to ensure that technical and policy documents are available in Turkish in order to overcome any misunderstandings and prepare for contingencies.

Security Levels and Security Clearance

Cyber protection of nuclear power plants requires commensurate attention to perimeter security.
Physical security is an indispensable part of cyber security, since nuclear power plants run their firewalls and intrusion detectors on physical servers, and reaching those servers would be the first step in an attack. Fiber optic cables and other exposed connections must be protected from malicious interference; in some cases, a pair of scissors can be more harmful than a Trojan. The protection of computer systems, cables, and connections to the electrical grid should therefore be categorized as a high-risk concern. Inside the power plant, computers should be categorized according to their security clearance level, and lower-level computers' access to high-security computers should be banned. These security protocols should be checked periodically, under the working assumption that the rules are not being followed. All of these measures depend on controlling, during screening at the entrance of a protected area, any equipment with electromagnetic capability. Since the range of such devices is so large, site management must decide how to limit them. Stuxnet showed that mobile devices, cellular phones, USB devices, NFC devices, RF devices, external hard disks, laptops, CPU-operated devices, and any device with Bluetooth or wireless connectivity can be used to transfer malware. Admitting such devices onto facility grounds must be limited and kept under strict control. There are examples of facility employees using their relationships with screening officers to bring their electronic devices into protected or vital areas. All visitors have to follow the facility's screening process and deposit (and lock) their electromagnetic devices in the boxes reserved for that purpose.
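The clearance rule described above, banning lower-level computers from reaching high-security ones, amounts to a simple comparison over ordered levels. The sketch below is illustrative only; the level names are hypothetical and not drawn from any nuclear regulation:

```python
from enum import IntEnum

class Clearance(IntEnum):
    """Ordered security levels; a higher value means more sensitive."""
    PUBLIC = 0
    RESTRICTED = 1
    VITAL = 2

def access_allowed(subject_level: Clearance, resource_level: Clearance) -> bool:
    """A subject (user or computer) may only reach resources at or
    below its own clearance level (the "no read up" rule)."""
    return subject_level >= resource_level

# A RESTRICTED workstation must not touch a VITAL control server:
print(access_allowed(Clearance.RESTRICTED, Clearance.VITAL))  # False
print(access_allowed(Clearance.VITAL, Clearance.RESTRICTED))  # True
```

In a real deployment this check would be enforced by network segmentation and firewall policy rather than application code, but the ordering logic is the same.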
To prevent tailgating, the use of mobile phones at checkpoint entrances has to be restricted.49 Electromagnetic devices collected from visitors must be kept in a Faraday cage within the protected area of a nuclear power plant to prevent any intrusion into the network through them. The screening process should be repeated on exit to ensure no such devices are taken out of the site. The computer and network systems of a nuclear facility are another major security concern. Nuclear power plant systems require hardware replacement and maintenance from time to time, and the regulator has to specify how the operator will design the hardware support system. All new hardware should be tested and observed on the national authority's test bed. Since these processes take time, the regulator should encourage the operator to create a hardware management system before the facility begins operation, stocking spare parts so that in any breakdown the facility management can quickly replace the required parts without delay. Third-party contractors should also go through background checks. Heating, ventilation, and air conditioning (HVAC) management systems are designed for functionality and robustness rather than security, and are therefore among the less secure components of nuclear power plants. Yet today's HVAC systems are IP-enabled appliances connected to local networks, and to upgrade and patch them, contractors access the HVAC servers from outside the facility. The vulnerabilities of these servers are quickly turning into systemic risks: any intrusion into an HVAC system could easily be used as part of a hybrid attack.
The regulators and operators of nuclear power plants must be sensitive to the HVAC systems at all levels of security.50

Reference: https://edam.org.tr/document/CyberNuclear/edam_cyber_security_ch4.pdf

More information:
- Arvind Sundaram et al., "Covert Cognizance: A Novel Predictive Modeling Paradigm," Nuclear Technology (2021). DOI: 10.1080/00295450.2020.1812349
- Matthias Eckhart et al., "Digital Twins for Cyber-Physical Systems Security: State of the Art and Outlook," Security and Quality in Cyber-Physical Systems Engineering (2019). DOI: 10.1007/978-3-030-25312-7_14
- Yeni Li et al., "Data trustworthiness signatures for nuclear reactor dynamics simulation," Progress in Nuclear Energy (2021). DOI: 10.1016/j.pnucene.2020.103612
- Arvind Sundaram et al., "Validation of Covert Cognizance Active Defenses," Nuclear Science and Engineering (2021). DOI: 10.1080/00295639.2021.1897731
Danger often comes from an unexpected direction. For example, while you are alert to pickpockets, criminals may be approaching invisibly, over Wi-Fi.

Here's a typical scenario: Let's say you meet up with friends at a café and have a bite to eat while deciding what to do next. Maybe you decide to continue on to a movie. Or a play. Or a concert. That's when you connect to an available Wi-Fi hotspot and buy tickets online. Soon after, you find your credit card has been maxed out. Sounds terrible, doesn't it? Wouldn't it feel fair and just to find the culprits and take them to the police?

OK, let's try: Do you remember that while you were enjoying your meal with friends, two young people at the table next to you had just finished yet another cup of coffee? They looked ordinary, having a quiet conversation and occasionally peering at their laptop. But what you didn't see was the special traffic-interception equipment in their bag.

These people came to the café not for coffee and croissants but to steal data from visitors. They created an open Wi-Fi hotspot to attract victims and got access to all traffic sent and received by the devices of anyone who connected to it. Someone logged in to an online bank, and the criminals got their credentials. The couple at the next table over logged in to Instagram to post a selfie, and the criminals gained access to their social network accounts. Your friend checked her corporate e-mail and — well, you see where we're going with this.

This kind of thievery doesn't require high-level programming skills. YouTube has more than 300,000 videos that explain how to hack Wi-Fi. Moreover, the necessary equipment is cheap — less than $100. Having received your banking and personal data, cybercriminals can continue the attack and gain substantial profit.

How it works

There are several ways to gather data with the help of fake Wi-Fi.

1. Sniff network traffic

A method as old as time — eavesdropping — works with Wi-Fi as well.
Common plugins and apps can turn your smartphone or laptop into a sniffer — an eavesdropper — and in addition, you can purchase specialized and powerful equipment online. Thus equipped, you'll be able to intercept data transferred over the air and fish out useful files such as cookies and passwords. Of course, you'll need an unencrypted or poorly protected network (for example, one secured with the weak WEP protocol) to listen in on other people's business. The WPA and, especially, WPA2 protocols are considered more reliable.

2. Create a rogue (fake) hotspot

This is what the criminals did in our example. The thing is, people place a certain amount of trust in the places they visit: For example, we trust that the food in a café will not make us sick, the staff will be polite, and the Wi-Fi will be secure. Cybercriminals take advantage of that trust. For example, you will often see several Wi-Fi networks in hotels. They are usually created in popular places whose many visitors create too high a load for one network to serve reliably. But there's nothing to stop criminals from making a Hotel Wi-Fi 3 network in addition to the Hotel Wi-Fi 1 and Hotel Wi-Fi 2 already set up.

3. Execute the "evil twin" attack

In fact, this is a variation on the previous method. Computers and mobile devices usually remember the networks they've connected to before so that they can reconnect automatically. Sometimes criminals copy the names of popular networks (for example, free Wi-Fi connections in coffee shops and fast-food chains) to fool your devices.

What can you do?

We recommend reading our post that explains in detail how to use public Wi-Fi securely, but just in case, here are four must-follow rules.

a) Do not trust unprotected networks that don't ask you to enter a password.

b) Turn off Wi-Fi when you don't need to use it.
c) Trim your list of remembered networks from time to time. d) Do not use online banks and do not log in to important sites in cafés, hotels, malls, and other unreliable places. The good news is, all users of Kaspersky Internet Security — Multi-Device and Kaspersky Total Security — Multi-Device can protect themselves with the help of our new Secure Connection component. If you turn it on, Secure Connection will encrypt your data every time you connect to public Wi-Fi and other unreliable networks. You can set up this component flexibly, programming it to turn on automatically when you: - connect to unreliable Wi-Fi; - access banking and payment systems; - purchase something online; - use your e-mail, social networks, messaging, and other Internet communication resources. In all of these cases our solutions will protect you and your data!
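The "evil twin" trick described above works because devices identify networks by name (SSID) alone. As an illustrative sketch (not a feature of any Kaspersky product), one heuristic defense is to also remember each access point's hardware address (BSSID) and flag mismatches, though a determined attacker can spoof BSSIDs too:

```python
def find_suspect_networks(remembered, observed):
    """Flag SSIDs whose access-point hardware address (BSSID) differs
    from the one recorded at first connection -- a possible evil twin.
    Both arguments map SSID -> BSSID."""
    return [ssid for ssid, bssid in observed.items()
            if ssid in remembered and remembered[ssid] != bssid]

# Hypothetical example data: the café network now answers from a
# different access point than the one we first connected to.
remembered = {"CoffeeShop_FreeWiFi": "aa:bb:cc:11:22:33"}
observed = {"CoffeeShop_FreeWiFi": "de:ad:be:ef:00:01"}
print(find_suspect_networks(remembered, observed))  # ['CoffeeShop_FreeWiFi']
```

This is why trimming the list of remembered networks helps: the fewer names your device trusts automatically, the fewer SSIDs an attacker can impersonate.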
Researchers can't account for human behavior, however. In the early stages of the Ebola outbreak, the World Health Organization, Doctors Without Borders and other aid organizations concentrated their efforts on the ground. They tried to convince patients to go to hospitals or let aid workers set up quarantine areas in their homes. Unfortunately, these and other interventions did little to slow the outbreak. According to the WHO, the number of cases has nearly doubled in the last three weeks, prompting Sierra Leone’s government to enforce a three-day lockdown over the weekend. On September 17, WHO director general Margaret Chan said there are now at least 5,357 reported cases, including 2,630 deaths, in Guinea, Sierra Leone, Liberia, Nigeria, and Senegal. "None of us experienced in containing outbreaks has ever seen, in our lifetimes, an emergency on this scale," she said. She has previously said the numbers are an underestimate, as there are many unreported cases. On September 16, the U.S. Centers for Disease Control and Prevention called the outbreak the world’s first Ebola epidemic. As the speed of this outbreak increased, experts planning the response started relying more heavily on computer models, says Dr. Martin Meltzer. Meltzer is a senior health economist at the CDC, where he leads the Health Economics and Modeling Unit. On August 4, Meltzer started building the CDC’s Ebola models, called EbolaResponse. Right now, the goal is to use the models to understand how to get the rate of transmission to an average of less than one infected person per infectious person, Meltzer told me. This is what has worked in stifling previous infectious disease outbreaks. His first step is plotting the number of cases, to determine how many more will emerge if the outbreak continues at its current rate. 
According to a paper he published today in the CDC’s Morbidity and Mortality Weekly Report, the model shows that there will be 8,000 cases in Sierra Leone and Liberia by September 30 if there is no significant increase in the impact of health interventions. When the CDC corrects for potential unreported cases, the number spikes to 21,000 cases by September 30. Looking farther ahead, the numbers are shocking: By January 2015, Sierra Leone and Liberia could have 550,000 cases, or 1.4 million if corrected for underreporting. Meltzer then uses the model to understand how to stop this from happening by testing different hypothetical situations. Experiments with the models have suggested that roughly 70 percent of patients would need to be in an effective quarantine setting, either at a hospital, Ebola treatment unit, at home, or through a safe burial, in order to bring the outbreak under control. Approximately 10 percent of patients are currently in these safe settings. If the goal of 70 percent is reached by December 22, the epidemic in Liberia and Sierra Leone “would almost be ended by January 20, 2015,” Meltzer wrote. Even then, there are things that the model cannot account for. “Appreciate that not all hospitals/Ebola treatment units and certainly not all households with a patient ‘at home with effective quarantine/ isolation’ will be entirely secure,” Meltzer wrote in an email. “We can expect some transmission to occur at such locales, but hopefully on average, less than 1 person infected per infectious person.” The models show that in effective hospital quarantine, the rate of transmission is 0.12; in effective at-home, it’s 0.18; when they are not in effective isolation, it is at least 1.8. When I spoke with Meltzer in August, he had just finished a meeting with Dr. Bryan Lewis, a computational epidemiologist at the Virginia Bioinformatics Institute (VBI) at Virginia Tech. 
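The setting-specific rates above make it possible to sketch why raising the isolated fraction matters. This is a back-of-envelope weighted average, not the CDC's EbolaResponse model (it ignores mixing between settings and time dynamics), and the 0.15 midpoint between the hospital and at-home rates is an assumption for illustration:

```python
def effective_rate(p_isolated,
                   r_isolated=0.15,     # assumed midpoint of 0.12 (hospital) and 0.18 (home)
                   r_uncontrolled=1.8): # rate with no effective isolation
    """Average number of new infections per infectious person when a
    fraction p_isolated of patients is in effective isolation."""
    return p_isolated * r_isolated + (1 - p_isolated) * r_uncontrolled

print(effective_rate(0.10))  # about 1.6 -- the outbreak keeps growing
print(effective_rate(0.70))  # about 0.65 -- below the target of one
```

Even this toy calculation shows the qualitative point: at the current 10 percent isolation level the average stays well above one new infection per case, while pushing isolation toward the CDC's benchmark range brings it below one.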
It was an impromptu meeting in which Meltzer offered Lewis advice on the types of information aid organizations need right now. Since early July, Lewis and graduate student Caitlin Rivers have also been modeling the outbreak. (Lewis's research funding does not cover Ebola modeling, but his funders at the National Institutes of Health gave him clearance to concentrate on Ebola, given the circumstances.) By July 18, they had a preliminary model. It is a variation on a classic model called SEIR. The SEIR model estimates how a disease will spread through an entire population, assuming that there is normally a natural balance between death rates and birth rates within that population. It sorts people into four categories: susceptible (everyone is automatically put in this category at birth), exposed (in the case of Ebola, it can take up to 21 days for signs of infection to appear), infectious (this stage lasts an average of 3 to 15 days for fatal cases, and 10 to 25 days for recovered patients; this category includes the bodies of deceased victims), and removed (patients who have been treated successfully are considered immune). The model looks at how the sizes of the different subgroups change relative to each other. Their model is slightly different from the CDC's, which tracks patients through five stages: susceptible, infected, incubating, infectious, and recovered. Lewis and Rivers's model uses 12 parameters for incubation and infectious periods, because these periods can vary so much, as well as for different modes of transmission: in the hospital setting, in the community, and in a funeral setting. To determine how many new cases are transmitted in each setting, the models require data on how people interact, who interacts closely enough that transmission is possible, and how many cases already exist. This data is sparse in West Africa, as it is in many developing regions.
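A minimal version of the classic SEIR dynamics described above can be written in a few lines. This is the textbook four-compartment model stepped with simple Euler integration and illustrative parameter values, not the 12-parameter variant Lewis and Rivers use:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, n, dt=1.0):
    """One Euler step of the classic SEIR model.
    beta: transmission rate, sigma: 1/incubation period,
    gamma: 1/infectious period, n: total population."""
    new_exposed = beta * s * i / n * dt   # S -> E: contacts with infectious people
    new_infectious = sigma * e * dt       # E -> I: incubation ends
    new_removed = gamma * i * dt          # I -> R: recovery or safe burial
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_removed,
            r + new_removed)

# Seed one case in a population of one million and run for 90 days
# (parameter values here are illustrative, not fitted to the outbreak).
s, e, i, r = 999_999.0, 0.0, 1.0, 0.0
for _ in range(90):
    s, e, i, r = seir_step(s, e, i, r,
                           beta=0.3, sigma=1 / 10, gamma=1 / 8, n=1_000_000)
```

Because beta/gamma exceeds one here, the infectious count grows over the run; the epidemiologists' goal of fewer than one new infection per infectious person corresponds to pushing that ratio below one.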
(Researchers studying other infectious diseases, such as dengue fever in South America, have tried putting GPS trackers on individuals in order to understand social interactions. No such project is currently underway in West Africa, as far as Meltzer or Lewis know.) The countries impacted by this Ebola outbreak have been ravaged by civil wars within the past 15 years. In Liberia, at least 350,000 people were killed in civil wars from 1989 to 1996 and 1999 to 2003. In Sierra Leone, an 11-year civil war ended in 2002, with more than 50,000 dead and much of the country decimated. Guineans faced decades of military coups and misrule until 2010. These areas are still recovering, still rebuilding governments and trust, which makes it more difficult to collect data. It's likely, according to the WHO, that cases are going unreported. Meltzer asks task-force leaders to provide as much empirical data as possible, and the CDC is currently working with local governments to standardize data collection, but for now, the CDC and VBI's models rely on two data sources: publicly available case counts over time, and historic case information regarding incubation period and length of infectious period. Rivers also combs through local news reports and anecdotal evidence to determine how many of the new infections emerge from funerals. In West Africa it is customary for mourners to touch the body of the deceased during a funeral. Since Ebola lives on after a person has died, funeral-goers are considered at high risk for transmission. More often than not, Rivers has to input the collected data into spreadsheets by hand. Once VBI's model for February to June started to match what the researchers already knew had happened during those months, they started to use it to project what is still to come.
They run the model for the overall outbreak, as well as individually for Guinea, Sierra Leone, and Liberia, every week to predict what will happen in the coming weeks and to test the possible effects of different interventions. The picture, as expected, is bleak. Rivers and Lewis hesitated to share actual figures from their forecasts, because the course of the outbreak could shift in ways the models cannot predict. All of their efforts, all of the modeling, can tell them what will happen if the interventions start to make an impact. The models can tell aid organizations what they need, why they need it, and when they’ll need it. The models cannot, however, tell them how to achieve the necessary benchmarks, because there is no way to model for human behavior. “In particular Ebola is strongly determined by human behavior, and it’s pretty impossible to predict how humans are going to start doing things. At the moment there’s a lot of resistance to some of the classic public health interventions,” Lewis said, citing the recent looting of a treatment center in Monrovia. On September 18, Reuters reported that three journalists and five health care workers were brutally killed while spreading Ebola awareness in Liberia. “It makes a lot of sense that the population doesn’t trust authority figures, because why should they? Those are the people that burned down their villages previously. It’s going to take a longer period of time for the population to understand that the people showing up in these scary white suits are actually helping people instead of killing them.” “The trouble is to get people to believe that going to the hospitals is in their best interest,” said Meltzer. “We’ve got to get people to understand that. You can go around to villages and cities and slums all you want and say, ‘If you’re ill, go to the hospital.’ Why should anybody believe? We can’t model that.” They can collect empirical data from aid workers in different locations. 
They can see how effective a message was in one community, and see if that effect would be worth the effort in other areas, but no matter what, they can’t model human behavior. The interventions seem to be working in Guinea, where the rate of transmission is now less than one, according to Meltzer. They have cut off the chain of the disease there, he said, and must stay vigilant in order to keep it that way. Across West Africa as a whole, though, the average transmission rate is still higher than two new infections per infectious person, according to Maia Majumder. Majumder is a Ph.D. student at MIT Engineering Systems and research fellow at HealthMap. Her group is working with an IDEA (Incidence Decay and Exponential Adjustment) model in order to estimate the growth and longevity of the outbreak, and how transmission rates change when various interventions are attempted. They are looking at some of the more recent interventions, such as closing borders and opening new treatment centers. Their outcomes show “that what we’ve done so far has not yet been enough to make a real dent in decreasing the growth rate of the outbreak,” she said. “If nothing changes in the disease control and prevention department, case counts will continue to soar.” It will start to subside, though, if the interventions start to make an impact. But it’s impossible to predict if the interventions will make an impact, because that depends on human behavior. Even if the average is brought below the target of one new infection per infectious person across the entire region, though, there will still be dangerous outliers that the models do not take into account. “Some sick people won't infect anyone, and some will be super-spreaders, transmitting to perhaps a dozen people,” Rivers said. Regardless, the goal remains lowering the average rate of transmission as much as possible. This fall, experimental Ebola vaccines are being tested in small clinical trials. 
Assuming the vaccination is 100 percent effective, Rivers said, “We would hypothetically need to vaccinate around 50 percent of the susceptible population to achieve herd immunity.” Herd immunity is a term epidemiologists use to say that because enough people are vaccinated, the community at large is considered immune to a disease. If herd immunity is achieved, there is a very low probability of an infected individual coming in contact with a susceptible individual. The 50 percent figure assumes that everyone interacts with everyone else. Fewer people would need to be vaccinated if they instead use the approach of ring vaccination, in which you only vaccinate people who have or will come in contact with infectious people. The ring vaccination approach was successfully used to eradicate smallpox. Lewis and Rivers are using their models to determine if a vaccine, or other new interventions, will be “just a drop in the bucket” or if they are worthwhile. According to the Associated Press, doctors are also trying blood transfusions from survivors to current patients. British nurse William Pooley will be donating blood to an American patient. An experimental treatment called ZMapp was successfully used to treat two infected American healthcare workers, and organizations are considering whether to produce it on a mass scale. Unfortunately, Rivers explained, devoting the available funds to a large-scale production of the treatment would not be as effective as concentrating on a vaccine, because, at least historically, prevention has a bigger impact than treatment. Even if the models show that vaccines are the way to go, though, the models will still fall short of telling aid organizations how to convince those people that getting the vaccine is in their best interest. How to make that happen can only be determined through trial and error, through trying different messages and seeing which catch on. 
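The 50 percent figure quoted above is consistent with the standard herd-immunity threshold formula, 1 - 1/R0, under the stated assumptions (a fully effective vaccine and a population where everyone interacts with everyone else) when each case infects about two others on average:

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune so that, on
    average, each case infects fewer than one susceptible person."""
    if r0 <= 1:
        return 0.0  # an outbreak with R0 <= 1 dies out on its own
    return 1 - 1 / r0

print(herd_immunity_threshold(2.0))  # 0.5 -- the ~50 percent figure above
print(herd_immunity_threshold(1.5))  # about one-third
```

Ring vaccination lowers the practical requirement precisely by breaking the everyone-meets-everyone assumption: immunizing only likely contacts targets the transmission chains directly.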
Right now, the most important thing aid organizations can do is to increase the percentage of patients in effective isolation settings in Liberia and Sierra Leone, according to the CDC report. (It is already considered controlled in Guinea, and has not spiraled out of control in Nigeria or Senegal.) If they are able to increase the percentage from 10 to 13 starting today, hit 25 percent on October 23, 40 percent on November 22, and the 70-percent goal by December 22, the outbreak will peak at up to 3,408 daily cases, and will drop to less than 300 daily cases by late January. If they don't start increasing from 10 to 13 percent until October 23, there could be 10,646 daily cases (accounting for underreporting), and if it's pushed back to November 22, there could be 25,847 daily cases by January 20, 2015, according to the CDC's Ebola Response model. In that case, there could be more than 1.4 million cases of Ebola in two countries with a combined population of approximately 10.4 million people. Even still, Meltzer, Lewis, Rivers, Majumder and other computational epidemiologists around the world are keenly aware that experimenting with computer models offers a forecast much faster than experimenting on the ground. And in the event that there is a sudden shift in public sentiment, in public trust and willingness to seek treatment, Meltzer's model will already have told the CDC how to prepare for it.
As the day when US citizens cast their votes for president quickly approaches, whether the voting process itself can be tampered with is a question that interests many. It is widely believed, but never officially confirmed, that the DNC hack – and the subsequent leaking of data stolen during the breach – is the work of hackers backed by the Russian government and president Vladimir Putin. As Harvard law professor and former US assistant attorney general Jack Goldsmith pointed out to Ars Technica, the Russian government has, in the past, used "social media disinformation, denial of service attacks, and hacking campaigns to shape the political landscape in former Soviet states and elsewhere in Europe frequently over the last decade." But would they attempt to infiltrate US e-voting machines and systems in order to influence actual voting outcomes? It's possible. Is it likely, though? That's up for debate. It's not that they – or anyone else – couldn't. Since late August, the Institute for Critical Infrastructure Technology has been publishing analyses of the ease of hacking voting machines, interfering with campaigns, stealing data, and so on. Andrew Appel, a professor in Princeton University's Computer Science Department, has also recently published a rundown of some of the electronic voting machines used in the US and their vulnerability to hacking. While some of them can be hacked over the Internet, all can be hacked by attackers with physical access, and only a few allow for an audit or recount of (paper) votes to check for possible interference. Generally, security experts have been warning for years about the "hackability" of electronic voting machines, with few positive results, even though tampering with electoral systems is confirmed to have already happened.
US Congressman Hank Johnson is currently trying to minimize this risk, by introducing two bills for the US Senate to vote on: - The “Election Infrastructure and Security Promotion Act of 2016”, which would make voting systems part of the country’s critical infrastructure. The bill would require the Department of Homeland Security to protect it, and would promote the development of security standards and innovative security solutions. - The “Election Integrity Act” that (among other things) would limit the purchase of any new voting systems that do not provide durable voter-verified paper ballots, and enable verifiable manual audits of federal elections. These bills will surely not be voted into law before these presidential elections, but it’s good to see that some legislators are taking the threat seriously.
Ever wonder what the Simple Certificate Enrollment Protocol (SCEP) section is when setting up a configuration profile? Are you curious whether you should be setting it up? Will something amazing happen if you do? Let's see if we can shed some light on the subject.

First, it's important to understand that IT environments are using certificates more and more for authentication. The proof is in all of the certificates used for communication between an iOS device and an MDM server. Even the JAMF Software Server (JSS) has a built-in certificate server (called a certificate authority). Some IT environments also require certificates for network access. So, certificates are everywhere!

That's great, but why certificates instead of good ol' usernames and passwords? First, certificates are built on cryptographic math that makes them very hard to guess or forge. Second, they're very easy to change without bothering the end user. But getting a certificate used to be a real pain: you had to get a special link from your environment's security professional, go to a web form, fill out a lot of details, and download the certificate. Once you had the certificate, you'd often need instructions on how to use it. This was a tolerable process when only a few people needed certificates, but now more and more people need many certificates to do even basic tasks.

How SCEP helps

SCEP was designed to automate the process of getting these certificates out to people. Let's say your network administrator has set up the wireless so that you have to have a certificate to get onto the network. With an SCEP server, the computer—not the end user—contacts the certificate authority, requests the certificate, downloads the result, and puts it in its proper place.

Do you need to do it? Well, it depends on whether you need certificates and whether you have an SCEP server on your network. If you do, you really should consider using your SCEP server.
It'll make things a LOT easier in the long run.

Steps for configuring SCEP

Now that you know why you'd want to use SCEP, let's cover how to configure it in the JSS. First, SCEP is configured in the configuration profiles section of the JSS under Computers or Mobile Devices. (Note: if you can't press the add button, make sure your JSS is set up for MDM.) Next, add a new configuration profile, navigate to the SCEP server tab, and click Configure. Fill out the details provided by your security professional. These details describe the server and include things such as the server URL, the instance name (in case the same server provides more than one SCEP service), and authentication details. The authentication parameters ensure that only valid users and devices are given a certificate.

Once you fill out the SCEP details, you can use the resulting SCEP certificate in other parts of the same configuration profile. For example, if you add the VPN payload and choose certificate authentication, you can select the SCEP certificate from the drop-down. What does this actually mean for end users? From a single configuration profile, their devices can request a certificate from the SCEP server and start using that certificate as an authentication method.

Tips for SCEP use

Remember that in order to configure a service using the resulting SCEP certificate, you have to configure that service in the same configuration profile as the SCEP service. If you configure all of this, your devices should start contacting your SCEP service automatically, downloading certificates, and then using them to authenticate to other services on your network.
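For reference, the SCEP details you enter in the JSS ultimately land in a small property-list payload inside the profile. The fragment below is a hand-written sketch with placeholder values — the URL, instance name, and challenge would come from your security professional, and a real profile generated by the JSS will contain additional payload keys:

```xml
<!-- Hypothetical SCEP payload fragment; values are placeholders. -->
<dict>
    <key>PayloadType</key>
    <string>com.apple.security.scep</string>
    <key>PayloadContent</key>
    <dict>
        <key>URL</key>
        <string>https://scep.example.com/scep</string>
        <key>Name</key>
        <string>example-instance</string>
        <key>Challenge</key>
        <string>one-time-enrollment-secret</string>
        <key>Key Type</key>
        <string>RSA</string>
        <key>Keysize</key>
        <integer>2048</integer>
    </dict>
</dict>
```

The one-time challenge is what stands in for the old manual web-form step: it proves to the certificate authority that this particular device is allowed to enroll.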
Introduction to Evolution of Cloud Computing

Cloud computing plays an important role in everyone's life; we can hardly imagine handling big data without it. But we did not reach this level in just a few years: the evolution of cloud computing can be traced back to the 1950s. Are you surprised? In this article, you will see the development of cloud computing in three distinct phases. Without further ado, let's start.

The 3 Phases: Evolution of Cloud Computing

Phase 1: Idea Phase

The Idea Phase covers the technical developments of the pre-internet era. Let's see how it all started.

Distributed computing connects multiple independent systems so that they appear as a single entity to users. It already showed the basic qualities of cloud computing, such as scalability, concurrency, and continuous availability. Its main limitation, however, was that all the computers had to be in the same geographical location. It is a technique still in use today.

The mainframe is a large computer with high processing power and large storage capacity. Here, instead of multiple systems, a single powerful computer presents itself as multiple systems on the user end. Though it offered greater coverage and computing power, we were still constrained by geographical location, and this led to the next step: cluster computing. Even today, mainframes help process online transactions and research data.

Phase 2 – Pre-Cloud Phase

This phase can also be called the Internet Phase. In the 1960s, ARPANET laid the groundwork for the internet by connecting four systems in different geographical locations in America. Thus the internet era began, and with it the future of cloud technologies. In cluster computing, each computer is connected to the others through a high-bandwidth network; this removed the cost of a mainframe and became its best alternative.
Though the problem of cost was solved, geographical restrictions still prevailed, as the internet was in its early stages. In the 1990s, grid computing technologies came into action. Here, systems located in different parts of the world, each owned by a different organization, are connected through the internet. This solved the main problem of distance, but new problems emerged: as the distance between nodes increased, high bandwidth was required, and that was not possible in all cases. However, grid computing laid a strong foundation for today's cloud computing, and many people call cloud computing the successor of grid computing.

Virtualization refers to techniques for dividing hardware into different virtual layers, which allows users to run multiple instances simultaneously on the same hardware. It was introduced some 40 years back and is now a core idea used by major cloud providers like Amazon, Google, and others.

Phase 3 – Cloud Phase

Once grid computing was integrated with virtualization techniques, all that was still needed were better hardware resources and the internet. These became available in the meantime, and cloud development began.

Web 2.0 also played an important role in cloud computing. Popular Web 2.0 services such as Google Maps, Twitter, Facebook, and other social media needed large amounts of storage, and this led to the development of cloud storage techniques and created a new field of study: client-server management.

The next step was the introduction of SaaS. As many people started using personal computers and smartphones, the scale of the technology and of the user base grew rapidly in the 2000s. This created a new business model called Software as a Service, or SaaS for short. Many SaaS companies then began providing cloud storage, infrastructure, and management services, an approach termed utility computing. And from there, today's cloud computing was brought into action.
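The utility-computing idea mentioned above, paying for computing the way one pays for electricity, can be sketched in a few lines. The resource names and rates below are hypothetical, chosen only to illustrate the metering model:

```python
# Minimal sketch of utility-computing ("pay as you go") billing.
# Resource names and unit rates are hypothetical, for illustration only.
RATES = {
    "compute_hours": 0.05,  # price per VM-hour
    "storage_gb": 0.02,     # price per GB-month
    "egress_gb": 0.09,      # price per GB transferred out
}

def monthly_bill(usage: dict) -> float:
    """Charge only for what was actually consumed, like a utility meter."""
    return round(sum(RATES[item] * amount for item, amount in usage.items()), 2)

bill = monthly_bill({"compute_hours": 720, "storage_gb": 100, "egress_gb": 50})
print(bill)  # 720*0.05 + 100*0.02 + 50*0.09 = 42.5
```

The point of the model is that there is no upfront hardware cost at all; the bill is a pure function of metered consumption.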
There are many types of cloud, such as hybrid and simpler models. Whatever the case, we have only just started. There is a big future for cloud computing with the introduction of machine learning and AI technologies, and it is very hard to predict how cloud computing will change the future.
Technology to Secure Construction Site Safety?

Construction and building sites are considered 'a health and safety nightmare': 1,116 work-related accidents occurred over the period from 2011 to 2016, as reported by the Department of Occupational Safety and Health (DOSH). The construction sector is one of the top industries contributing to the country's economy. However, construction is a risky activity in which different parties face myriad challenges in one environment. The sector is associated with numerous accidents and fatal injuries caused by various factors, such as lack of supervision, lack of adherence to safe work techniques, failure to wear personal protective equipment, and failure to comply with the safe use of tools, vehicles, and machines. An analysis of secondary data from Malaysia's Social Security Organisation (SOCSO) found that 2,822 occupational injuries occurred in Malaysia, with an average annual incidence of 9.2 fatal job-related injuries per 100,000 workers.

Technological advancement is making construction work safer by embedding safety practices that tackle workplace accidents. Here are some of the ways technology helps on construction sites.

Drones for site survey

Modern construction sites are larger in scope and complexity than ever before (and continue to grow), which makes it difficult to manage an entire site effectively. Site inspection can take days to finish. In addition, every site contains safety hazards that can endanger site inspectors. The use of drones in construction has changed the way buildings are made. AI-driven software has developed alongside drone technology; it provides powerful systems for processing a drone's raw visual data into detailed maps of construction sites. Drones have also evolved into the more consumer-friendly role of providing unique photographic and video perspectives.
As a result of this evolution, drones are now critical to improving construction site safety. When equipped with infrared cameras and laser-based range finders, drones can conduct site surveys, look for safety hazards, and monitor construction workers, all while their human operators observe remotely from a safe location.

Exoskeletons to reduce risk

The nature of the job exposes construction workers to a high risk of straining their bodies or suffering from musculoskeletal disorders (MSDs). MSDs are the single largest category of workplace injuries, caused by lifting heavy objects, using heavy work tools, and using the wrong tools for the job. With advances in technology, exoskeletons come to the workers' aid, enabling them to perform their tasks more safely. Exoskeletons, or exosuits, are metal frameworks with motorized muscles that multiply the wearer's strength and improve posture, allowing a worker to lift up to 200 pounds safely.

There are two types of exoskeletons: power-assisted and unpowered. Unpowered exoskeletons use a mechanical harness wrapped around the worker's body to improve ergonomics and reduce fatigue; employers can thereby improve worker safety and reduce staff churn. Power-assisted exosuits are mobile machines powered by a system of electric motors, pneumatics, levers, hydraulics, or a combination of technologies that allow for limb movement with increased strength and endurance. They are designed to support the shoulders, waist, and thighs, and to lower the stress of lifting and lowering heavy items.

VR for realistic safety training

The construction site is abundant with potential health and safety hazards. This is why the construction sector is looking towards virtual reality for its safety training needs. Virtual reality (VR) simulates real-world situations, allowing workers to immerse themselves in environments that resemble real construction sites.
VR technology can be applied in the construction sector together with a unified endpoint management (UEM) solution, which helps contractors manage and control internet-enabled devices from a single interface. It lets workers experience situations that would be difficult to recreate for training in real life, testing them progressively at higher levels and more difficult standards. VR training can cover navigating tight corners, understanding the use of directional arrows, complex basket positioning, and avoiding hazardous placement, and scenarios can be tailored to match the construction site. Workers' health can also be supervised during training by monitoring their heart rate and stress levels.
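The kind of supervision described above can be sketched as a simple threshold check over sensor samples. The thresholds, field names, and readings below are assumptions made for illustration, not part of any real VR or UEM product's API:

```python
# Sketch of flagging at-risk trainees during a VR session from vitals samples.
# Thresholds and the sample schema are hypothetical.
MAX_HEART_RATE = 140  # beats per minute
MAX_STRESS = 0.8      # normalized stress score in [0, 1]

def check_vitals(samples: list) -> list:
    """Return one alert string per reading that exceeds a threshold."""
    alerts = []
    for s in samples:
        if s["heart_rate"] > MAX_HEART_RATE:
            alerts.append(f"{s['worker']}: heart rate {s['heart_rate']} bpm")
        if s["stress"] > MAX_STRESS:
            alerts.append(f"{s['worker']}: stress level {s['stress']}")
    return alerts

alerts = check_vitals([
    {"worker": "A", "heart_rate": 150, "stress": 0.4},
    {"worker": "B", "heart_rate": 120, "stress": 0.9},
])
print(alerts)  # one alert for A (heart rate), one for B (stress)
```

In practice a trainer would pause or simplify the scenario when an alert fires, rather than just logging it.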
We often hear about the benefits artificial intelligence (AI) can bring to medicine and healthcare through drug research, but could it also pose a threat? Researchers from Collaborations Pharmaceuticals, a North Carolina-based drug discovery company, have published a paper that highlights the dangerous potential of AI and machine learning to discover biochemical weapons. By simply tweaking a machine learning model called MegaSyn to reward instead of penalise predicted toxicity, their AI was able to generate 40,000 candidate biochemical weapons in six hours. Obvious in hindsight? Worryingly, the researchers admitted to never having considered the risks of misuse involved in designing molecules. "The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. We have spent decades using computers and AI to improve human health—not to degrade it," the paper noted. Even the company's work on Ebola and neurotoxins did not alert them to the damage that could be caused by flipping their models to seek out rather than avoid toxicity. From generation to synthesis The barriers to misusing machine learning models like MegaSyn to design harmful molecules are lower than you might expect. Plenty of open-source software has similar capabilities, and the datasets used to train such models are publicly available. What's more, the 40,000 toxins were generated on a 2015 Apple Mac laptop. Of these, hundreds were predicted to be more lethal than the nerve agent VX. One of the most potent chemical warfare agents of the twentieth century, VX uses the same mechanism to paralyse the nervous system as the Novichok nerve agent used in the 2018 Salisbury poisonings.
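The "flip" the researchers describe, rewarding rather than penalising predicted toxicity, amounts to changing the sign of one term in a candidate-scoring function. The sketch below is a generic illustration of that idea, not MegaSyn's actual code; all names, weights, and candidate values are hypothetical:

```python
# Generic sketch of ranking generated molecules. In normal drug discovery the
# predicted-toxicity term is subtracted (penalised); flipping its sign rewards
# toxicity instead, inverting which candidates the model surfaces.
# All names and values here are hypothetical.
def score(candidate: dict, penalise_toxicity: bool = True) -> float:
    sign = -1.0 if penalise_toxicity else 1.0
    return candidate["predicted_activity"] + sign * candidate["predicted_toxicity"]

candidates = [
    {"name": "mol_a", "predicted_activity": 0.9, "predicted_toxicity": 0.1},
    {"name": "mol_b", "predicted_activity": 0.5, "predicted_toxicity": 0.8},
]

safe_first = max(candidates, key=lambda c: score(c, penalise_toxicity=True))
toxic_first = max(candidates, key=lambda c: score(c, penalise_toxicity=False))
print(safe_first["name"], toxic_first["name"])  # the ranking inverts: mol_a mol_b
```

The unsettling point of the paper is precisely how small this change is: the expensive parts, the generative model and the toxicity predictor, stay exactly as they were.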
Fortunately, actually synthesising these potential new bioweapons is far more of a challenge than generating them on a computer. The specific precursor molecules needed to create VX, for example, are strictly regulated. Dangers would arise only if a toxin were found that did not require any regulated substances. Although this could likely be figured out with another set of parameters, the researchers felt uncomfortable taking that extra step. Before publication, Collaborations Pharmaceuticals presented their findings at the Spiez Laboratory, one of five labs in the world permanently certified by the Organisation for the Prohibition of Chemical Weapons (OPCW). The researchers' findings make an important case for the need to oversee AI models and to fully consider the ramifications of deploying complex AI.
Urban Design, Green Spaces, and Video Analysis Video Analytics for Urban Environmental Health & Wellbeing We all want to live in thriving communities and – by today’s metropolitan standards – this often means accessible, safe, and – increasingly – sustainable spaces. Green spaces, or designated natural oases in otherwise developed urban areas, have, therefore, become a major driver of city sustainability. Adding and expanding green spaces has become a key strategy for attracting new and retaining current residents and increasing their quality of life. While the importance of green spaces is clear to urban designers, they face challenges when it comes to making decisions about where these spaces belong and their size and scope. In smart cities, strategic decisions are data-driven and technologies from the fields of IoT, connectivity, and analytics are key investments that help overcome these challenges. As with any problem, urban planners can approach this hurdle by considering how they can maximize their existing investments for additional value across municipal business units. Many cities have already invested in video surveillance for law enforcement, security monitoring, and post-incident investigations. By enhancing these resources with intelligent video analytics, urban planners can identify the optimal locations for green spaces and other critical infrastructure, justify investments based on quantifiable and qualitative data, and measure the impact of urban design enhancements with ongoing data analysis. Green Spaces in Smart Cities Parks and green spaces increase a city’s appeal to potential and existing residents. In fact, the World Health Organization considers access to green spaces to be an important factor in human health and wellbeing. An important component of any community, green spaces are areas for residents to gather, exercise, and connect with nature. 
Smart cities that are committed to leveraging big data and technology to increase efficiency, improve residents' wellbeing, optimize municipal operations, and secure the city can also address urban design and green space planning with existing technology investments, such as video analytics. Intelligent video analysis enables cities to treat surveillance video as data, a resource that was previously underutilized. By making sense of unstructured video metadata through machine-learning techniques, video analysis technology empowers cities to unlock the value of their video surveillance to enhance public safety and advance sustainability.

By extracting operational intelligence from surveillance video, urban planners can track public transit usage patterns, vehicular and pedestrian traffic trends, and when to expect deviations from normal behavior. By visualizing video metadata to uncover how city spaces and resources are used, urban planners can make data-driven decisions about the optimal placement of city infrastructure, public transit access points, and even green spaces.

For instance, cities could use heatmap and traffic data to understand how visitors are using existing green spaces. Frequent crowding could indicate the need to expand a public park. Conversely, the analysis might reveal that a green space is being underutilized, which may call for adding public transit infrastructure or parking to make the area more accessible and convenient for visitors. Understanding the number of children that pass through a green space might help inform decisions about planning playgrounds and play structures. This and additional information enables intelligent decision-making when it comes to planning transit routes and accessibility, developing nearby commercial spaces, attracting businesses relevant to the visiting population, and identifying which park amenities are and are not attractive to visitors (and when).
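The heatmap-driven reasoning above, where crowding suggests expansion and underutilization suggests improving access, can be sketched as a simple rule over hourly visitor counts. The thresholds and data are hypothetical:

```python
# Sketch of classifying green-space utilization from hourly visitor counts
# extracted from video analytics. Capacity, thresholds, and counts are
# hypothetical, for illustration only.
def classify_usage(hourly_counts: list, capacity: int) -> str:
    peak = max(hourly_counts)
    avg = sum(hourly_counts) / len(hourly_counts)
    if peak > capacity:
        return "crowded: consider expanding the park"
    if avg < 0.1 * capacity:
        return "underutilized: consider improving access or transit links"
    return "healthy utilization"

print(classify_usage([5, 12, 40, 230, 180, 60], capacity=200))  # crowded
print(classify_usage([1, 2, 3, 2, 1, 0], capacity=200))         # underutilized
```

A real deployment would segment counts by time of day and demographic (as in the children/playground example) before drawing conclusions, but the decision logic stays the same shape.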
Reducing Waste and Increasing Sustainability

A key priority for many cities is making sure that city resources are used fully and efficiently while reducing waste. Video analytics can support this objective as well, by giving urban planners insight into city traffic and infrastructure usage. By understanding when and where pedestrian movement is expected throughout a day or week, cities can make smarter decisions about lighting and ensure that it is activated only when areas are in use. Conversely, the city might identify heavily trafficked areas that are not well lit at night and increase safety by changing the illumination patterns.

Optimizing Traffic Flows with Video Intelligence

Video analytics provides actionable intelligence to stakeholders across the municipal government. In addition to designers focused on developing green spaces and expanding sustainability initiatives, urban transit departments can leverage video intelligence for data-driven traffic optimization. By improving traffic flows in a city, whether pedestrian, bicycle, or vehicular traffic, as well as public transit infrastructure, city transit authorities can positively impact both the environment and the efficiency of navigating the city.

By identifying where and when residents and visitors are more likely to walk than to take a car, public transit, or a bike or scooter, city planners can make more pointed decisions about pedestrian walkways, crosswalks, and traffic lights. By tracking bicycle and scooter traffic in the city, planners can understand where bike lanes would be impactful and create the infrastructure needed to encourage residents to bike rather than drive, which has a long-term impact on the environment. Similarly, the transit authority can use video data to detect inefficiencies and bottlenecks in bus or train schedules and formulate better timetables based on residents' needs.
Better public transport and accessible pedestrian and biking options can reduce the allure of driving in a city, but even drivers can feel the benefits of city planning driven by video insights. Traffic planners can utilize video data to optimize traffic flows, identify the need for parking lots and traffic infrastructure, and proactively plan for expected changes in traffic flows.

Planning for the Smart Future Today

Along with other core smart city technologies, video analytics is supporting urban planners and city officials in creating a cleaner, more sustainable future. Data-driven decision-making empowers cities to improve resident and visitor wellbeing while proactively preparing for population growth, tourism, and economic development.
Upon completion of this chapter, you will be able to answer the following questions:

- What are some features of examples of cybersecurity incidents?
- What are the motivations of the threat actors behind specific security incidents?
- What is the potential impact of network security attacks?
- What is the mission of the Security Operations Center (SOC)?
- What are some resources available to prepare for a career in cybersecurity operations?