Unnamed: 0 int64 0 3k | title stringlengths 4 200 | text stringlengths 21 100k | url stringlengths 45 535 | authors stringlengths 2 56 | timestamp stringlengths 19 32 | tags stringlengths 14 131 |
|---|---|---|---|---|---|---|
1,400 | How to Use Google Cloud and GPU Build Simple Deep Learning Environment | How to Use Google Cloud and GPU Build Simple Deep Learning Environment
Create a Deep learning VM instance in Google Cloud Platform, install and set up Jupyter Notebook, and the Nvidia CUDA toolkit.
Photo by Infralist.com on Unsplash
Google Cloud Platform provides us with a wealth of resources to support data science, deep learning, and AI projects. Now all we need to care about is how to design and train models; the platform manages the rest of the tasks.
In the current pandemic environment, the entire process of an AI project, from design and coding to deployment, can be done remotely on the cloud platform.
I will demonstrate how to use Google Cloud Platform with a GPU to build a deep learning environment in the following four steps:
1. Create a VM instance with a GPU
2. Set up networking
3. Install Jupyter Notebook
4. Install Nvidia CUDA
Step 1: Create and Start a VM Instance
Increase GPUs Quota
IMPORTANT: If you get the following notification when you create a VM that contains GPUs, you need to increase your GPU quota.
From the top-left menu, select IAM & Admin -> Quotas.
In the filter table, select Limit name and GPUs (all regions).
GPUs Quota Page
It will list your GPU quota information; click ALL QUOTAS to open the Quota metric details page.
Check the Global option and click EDIT QUOTAS.
Enter the number of GPUs you need in New limit (I entered 2 here). Finally, click SUBMIT REQUEST.
Edit GPUs Quota Limits
Quota increase requests typically take two business days to process. Google will update progress information via email.
Create a VM instance
In the Google Cloud Platform Console, go to Home -> Compute Engine -> VM instances. Select your project and click Continue, then click the Create button. Specify a Name for your instance. Change the Region and Zone for this instance; I chose us-west1 (Oregon) and us-west1-b. Select a Machine configuration for your instance. In the Series section, we choose the N1 machine types, which are powered by the Intel Skylake CPU platform. There is also an N2 series that uses the higher-performance Intel Cascade Lake CPU platform, but N2 machines do not support GPUs yet, so we choose N1. In the Machine type section, we choose n1-standard-8 (8 vCPUs, 30 GB memory). Expand CPU platform and GPU at the bottom of the Machine configuration and add the GPU for your VM: click Add GPU, then choose the GPU type and Number of GPUs. To demonstrate, I have chosen two NVIDIA Tesla K80s here.
IMPORTANT: If you get the error Quota 'GPUS_ALL_REGIONS' exceeded. Limit: 0.0 globally when creating a VM, you need to request a quota increase for your GPUs. Please see the previous section, Increase GPUs Quota. In the Boot disk section, click Change to configure your boot disk.
For a deep learning project, I chose Deep Learning on Linux in the Operating System section.
In the Version section, we chose GPU Optimized Debian m32 (with CUDA 10.0), which is a Debian 9 based image with CUDA/cuDNN/NCCL pre-installed.
In the Size section, we need to choose at least 300 GB. In the Access scopes section, we check Allow full access to all Cloud APIs, because we need access to Google Cloud Storage buckets and other Cloud APIs. In the Firewall section, we check both Allow HTTP traffic and Allow HTTPS traffic so that we can access the Jupyter Notebook from an external network. Click Management, security, disks, networking, sole tenancy and choose the Disks tab. Under Deletion rule, uncheck Delete boot disk when instance is deleted; this way, if you accidentally delete the VM instance, the boot disk will not be deleted with it. Other sections can keep their default values. Click CREATE, and after waiting a few minutes, your VM will be available.
Add GPU
Choose GPUs
Boot disk Section
Firewall Setting
A VM instance created
STEP 2: Set up Networking
In order to access your Jupyter Notebook from an external network, you need to set up a static IP and a firewall rule for your new VM.
External IP addresses
Go to Navigation menu -> NETWORKING -> VPC network -> External IP addresses. Change your VM external address's Type from Ephemeral to Static, and add a Name for the new static IP address.
Firewall
Go to VPC network -> Firewall and click CREATE FIREWALL to create a firewall rule for your Jupyter Notebook. On the Create a firewall rule page, enter a Name for the rule and change Targets to All instances in the network. In Source IP ranges, enter 0.0.0.0/0. In Protocols and ports, check Specified protocols and ports and set the tcp port to 5000 or another port number. Leave the rest as default and click CREATE.
Create a firewall rule for Jupyter Notebook
A Jupyter Notebook rule created
STEP 3: Install Jupyter Notebook
Go back to your VM instances page and activate Cloud Shell at the top right.
Install Jupyter Notebook using pip
pip install jupyter
If you get an error message about permission denied, use the command below.
pip install jupyter --user
Generate a configuration file:
$ jupyter notebook --generate-config
Go to the Jupyter configuration directory and open the config file:
$ cd ~/.jupyter/
$ vim jupyter_notebook_config.py
Open jupyter_notebook_config.py and add the following to the end of the file. Make sure you replace the port number with the one you allowed firewall access to above.
c = get_config()
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False
c.NotebookApp.port = 5000
Launch Jupyter Notebook using the following command in your VM SSH window:
$ jupyter notebook
As shown below, your Jupyter Notebook is already running.
Open your browser and input the following address:
http://<External Static IP Address>:<Port Number>
Sometimes you need to enter a token; copy the token given in the command line as shown above.
Once everything is set up, your Jupyter Notebook looks like this.
STEP 4: Install NVIDIA CUDA
If you use a Deep Learning on Linux public image like me, then Google Cloud will have PyTorch, TensorFlow, CUDA, etc. pre-installed.
Sometimes, to avoid unknown errors, I recommend you manually install CUDA.
Download CUDA
TensorFlow and PyTorch only support CUDA 10.x, so we need to download CUDA Toolkit 10.1.
Choose the right target platform based on your Machine information.
IMPORTANT: In Installer Type section, we choose runfile(local) .
Copy the installation command.
IMPORTANT: To avoid GUI errors, we should add the --no-opengl-libs option to prevent the OpenGL libraries from being installed.
Run the command below. | https://medium.com/swlh/how-to-use-google-cloud-and-gpu-build-simple-deep-learning-environment-c6eadff2a569 | ['Jason Zhang'] | 2020-11-20 10:02:16.149000+00:00 | ['Machine Learning', 'Technology', 'Artificial Intelligence', 'Data Science', 'Programming'] |
1,401 | How can you create your own OTT app | There's no doubt that launching an OTT platform is a smart business idea, especially considering the scenario today. The number of OTT platform users was reported to be over 1,900 million in 2019, a number that is estimated to increase to 2,500 million by 2024. Clearly, demand and consumption are only going to increase, making it the right time to roll out an over-the-top (OTT) video application.
The important question is how to build an application that captures the market and competes with existing platforms. There are 5 key components that you must consider, along with a robust UI design and a high performing back-end video hosting and delivery platform.
Launching Your Own OTT Platform — Key Components
1. Hosting
You have a choice between hosting the platform on your own server or a cloud solution. The hosting solution (the server and underlying specs) will play a key role in the performance of the application.
While hosting it on your own server gives you complete control and more flexibility, it also requires round the clock technical support. Hosting on a cloud solution can be a cheaper and safer alternative. It’s important to remember that you’re not just hosting the application but also the videos that will be delivered, and the quality of streaming will depend heavily on the hosting servers.
2. Content Delivery Network
A CDN makes streaming and data transmission flawless for users across the globe by serving data from the server closest to the user. A CDN is especially crucial for OTT platforms because your users will be spread across the globe and expect a seamless experience by default.
3. Multi-Channel Streaming Protocol
Different protocols are available for streaming platforms: HTTP Live Streaming (HLS), Real-Time Messaging Protocol (RTMP) that uses dedicated streaming servers, protocols that support low-latency streaming like Common Media Application Format (CMAF) and Apple’s Low-Latency HLS, etc.
Selecting the right protocol will depend on the type of OTT platform you are looking to build.
4. Transcoder
A Transcoder encodes and decodes streams to convert videos on your platform into compatible versions for user devices. Added features, such as auto-adjusting the quality of multi-bitrate streams, enhance the viewing experience.
5. Cross-Platform Compatibility
By building a platform that is cross-compatible on TV, mobile, tablets, etc., you increase your audience and revenue potential. Users are actively seeking applications that work seamlessly on multiple devices with instant data sync. As such, your OTT platform should be designed for this need.
A smart way to launch your own OTT platform and also enhance video quality is to use an out-of-the-box white-label OTT solution. This drastically reduces development cost and time, and such solutions encompass all these critical components within one platform.
Mogi I/O, for example, is one such white-label OTT solution. Mogi I/O provides a ready-to-use front-end OTT interface that you can personalize with your company branding, and a back-end video hosting and delivery service. Features like content delivery via CDNs, AI-based image quality enhancement, and AI-based transcoding with video compression greatly enhance the quality of your videos, improving user experience. With such a solution, you can launch an OTT platform quickly and focus on content creation while letting the platform handle the rest.
Building Your Own OTT Platform
There are 3 ways to develop your own OTT platform:
Develop it in-house. Outsource to a development company. Host on a ready-to-use OTT platform.
1. Develop the App In-House
The first method is to design and develop the application in-house with a team of designers and developers. You need to consider the 5 components listed above when building the app or opt for a leaner and more efficient option like Mogi I/O for video hosting and delivery.
Building the app in-house might cost you more and place the responsibility of development, maintenance, and support on your team, but it will give you complete control of the application.
2. Outsource to a Development Company
The second option is to outsource design and development tasks to a development company. While you still maintain control of the functionality and features, you outsource the manual tasks and responsibility. This is a great option for businesses without an internal tech team.
The pros of outsourcing are that you can get a team of experts to develop a high performing app and leverage smart video delivery platforms to launch an excellent OTT platform. The drawback is that you won’t have complete visibility into the product.
3. Use Out-of-the-Box OTT Platforms
The final method, using a ready-to-use OTT application, is the best option of the three. Platforms like Mogi I/O provide plug-and-play white-label OTT solutions that you can use to launch your own branded app on Android and iOS in as little as 24 hours.
Mogi I/O provides an all-in-one solution for the front-end UI and the back-end content management portal. The ready-to-use front-end interface can be acquired and tailored to suit your company brand, while content is stored on the back-end CMS, which is equipped with a video tech infrastructure layer for transcoding, video-quality enhancement, compression, storage and streaming.
The platform takes care of all the technical aspects of launching an OTT platform (development, hosting, CDN delivery, transcoding, etc.) and also enhances video quality when streaming. The advantage here is that you don't have to worry about development or hosting and can focus solely on content creation.
Final Word
The consumption of video content is at an all-time high and will only go up as time progresses. While it may seem like popular video streaming platforms such as Netflix, Amazon Prime, and Hulu dominate the market, newer OTT platforms are still capturing audiences, especially with competitive pricing and fresh content. There’s never been a better time to launch your own OTT application. | https://medium.com/@vikrant_98774/how-can-you-create-your-own-ott-app-a39342ccbfa8 | ['Vikrant Khanna'] | 2020-08-20 05:01:35.247000+00:00 | ['Cdn', 'OTT', 'Streaming Video', 'Technology News', 'Aı'] |
1,402 | Trends in Telehealth 2020 | Telemedicine started more than 50 years ago. Today, virtual healthcare has only become more substantial, with millions of people seeking physician’s help through mobile apps, websites interfaces, and through calls and emails. In 2020, three trends will define telemedicine:
Increased Adoption
For many years, hospitals employed telemedicine only in unique medical cases. However, technology is now part of everyday medical practice. According to a study by FAIR Health, non-hospital telemedicine grew by 1,393 percent between 2014 and 2018. These cases involved patients recovering from illnesses who had video calls with remote physicians. Most people seek a doctor's help remotely for illnesses such as the common cold and skin rashes, which are not life-threatening.
Today, more than 15 percent of physicians in the US work in practices where telemedicine takes a larger share of day-to-day activities. In 2020 and beyond, more people will adopt telehealth.
Better Pay for Physicians
One of the hurdles in telehealth is low reimbursement. Both private and public players in the telehealth industry realize the immense savings that come with accessing healthcare remotely. According to one study, an individual saves between $19 and $121 for every visit.
Today, up to 40 states in the US have reimbursement-friendly telehealth parity laws. With such laws, people no longer view telehealth as a secondary option but as a standard way of receiving medical care.
Increased Apps and Systems to Help Vulnerable Populations
Today, telehealth helps more millennials than the elderly and the vulnerable. However, with better systems and more advanced apps, the elderly and vulnerable populations can access telehealth too.
The younger population can use any digital technology, including emails, apps, phone calls, and much more. The elderly and people with chronic conditions comprise up to half of the American population. This group can benefit more from telehealth than millennials.
Medical providers seek new ways to provide healthcare for the disadvantaged; telehealth is one of the ways that specialists consider.
Other Trends
With increased telehealth adoption, the industry will see other trends. These include increased home messaging devices, growth in clinical tools such as blood pressure monitors with patients, monitoring center links, and telemonitoring devices for vulnerable populations. In 2020, doctors and patients will become more accustomed to the use of these tools. The use of these devices will cut healthcare costs for patients who are chronically ill. | https://medium.com/@roger-blake-md/trends-in-telehealth-2020-822ff7518096 | ['Roger Blake Md'] | 2020-06-15 20:51:42.985000+00:00 | ['Healthcare', 'Medtech', 'Roger Blake Md', 'Technology', 'Medical'] |
1,403 | Let’s develop an Ecommerce Application from Scratch using Java and Spring | Overview of our Backend Application
In this Spring application, the following are the important packages that you have to know before starting.
This is the Spring architecture. The outside world calls the REST APIs, which interact with the service. The service calls the repository. The repository interacts with the database. We follow this pattern to keep the codebase maintainable, instead of having spaghetti code, which can be a nightmare in the long term.
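As a rough sketch of that flow, here is the same layering in plain Java. This is not the project's actual code: the class and method names are illustrative, and the real controller would carry Spring annotations such as @RestController, @GetMapping, and @PostMapping, with @Autowired wiring instead of manual construction.

```java
import java.util.ArrayList;
import java.util.List;

// Repository layer: the only layer that talks to the data store.
class UserRepository {
    private final List<String> users = new ArrayList<>();

    List<String> findAll() {
        return new ArrayList<>(users);
    }

    void save(String user) {
        users.add(user);
    }
}

// Service layer: business rules live here; persistence is delegated down.
class UserService {
    private final UserRepository repository;

    UserService(UserRepository repository) {
        this.repository = repository;
    }

    List<String> getAllUsers() {
        return repository.findAll();
    }

    void addUser(String user) {
        repository.save(user);
    }
}

// Controller layer: what the GET and POST endpoints would call into.
class UserController {
    private final UserService service;

    UserController(UserService service) {
        this.service = service;
    }

    List<String> getUsers() {          // backs the GET mapping
        return service.getAllUsers();
    }

    void createUser(String user) {     // backs the POST mapping
        service.addUser(user);
    }
}

public class Main {
    public static void main(String[] args) {
        UserController controller =
                new UserController(new UserService(new UserRepository()));
        controller.createUser("alice");
        System.out.println(controller.getUsers()); // prints [alice]
    }
}
```

Because each layer only knows about the layer directly below it, you can swap the in-memory repository for a database-backed one without touching the controller.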
Let's look at first Rest controllers
Controller
The UserController class provides two HTTP methods, GET and POST. The GET mapping function returns a complete list of users, and the POST mapping function saves the new user profile in the database.
As we can see, UserController has a reference to UserService. | https://medium.com/javarevisited/lets-develop-an-ecommerce-application-from-scratch-using-java-and-spring-6dfac6ce5a9f | ['Nil Madhab'] | 2021-03-24 05:07:26.538000+00:00 | ['Java', 'Spring Boot', 'Backend Development', 'Ecommerce', 'Technology'] |
1,404 | THE ADVANTAGES OF USING CUDO FOR A HARDWARE OWNER | If you are connected to the Internet in any way right now, you are a hardware owner. This hardware may be a phone, a laptop or gaming equipment. There is also a chance that you are not making optimal use of its capacity and computing power.
You have probably also heard about blockchain, the technology behind Bitcoin. Blockchain technology has helped a lot of businesses become decentralized, and this could well be the way businesses run in the near future. Decentralization removes a platform's single point of failure, as all of the platform's information is stored in pieces across different systems. When done right, decentralization also makes platforms far harder to hack.
Cloud computing services have always been about companies having large servers to cater to the growing compute needs of people worldwide. The more these services are needed, the more servers are mounted. This essentially means that the information and data of a lot of people (think millions) are contained in a single server, which, although rare, may become compromised. The potential effects of this compromise can be catastrophic. These sorts of services are known as centralized computing services.
Decentralized computing services, on the other hand, offer similar services, only this time the information is stored on several servers around the world. Many decentralized computing platforms try to have their users' information stored in pieces on servers that are not in one place. For this, many utilize their users' hardware. Information and projects are now stored and executed on individual hardware instead of a single server. Not only does this ensure the safety of their users' data, it also ensures that the suppliers of this hardware receive incentives for donating their equipment.
We now have a lot of platforms that offer decentralized cloud computing services, and some of them are becoming widely known. Although issues of scalability and complexity still surround blockchain-based services, some platforms have already simplified the process and are actively working to resolve the scalability issue.
For the context of this article, we are taking a look at Cudo network, a decentralized platform that utilizes their users’ hardware to provide cloud computing services worldwide.
CUDO is a decentralized platform that serves as an oracle for other blockchains/blockchain based applications while also offering distributed cloud computing services. As a global compute network, the Cudo network offers two major services;
1) To provide a secure Turing-complete oracle layer to blockchains, enabling any kind of workload requested on the blockchain to be executed in record time while being fully decentralized.
2) To be a cost effective alternative to the main cloud service providers available today.
Cudo serves three distinct sets of consumers:
i. The blockchain developer
ii. Cloud computing service users
iii. The Hardware owner
The hardware owners are the focus of this article, as there are a lot of persons worldwide with hardware that can benefit from using Cudo.
Passive Income Generation: Cudo is an application that can generate passive income for any owner with unused hardware. You can generate anywhere from 50 dollars a month to 400 dollars a month depending on the kind of hardware in your care. You don't have to do anything, as long as the hardware is powered on and the Cudo software is installed.
Ease of use: The Cudo application is easy to install and use. With a few clicks, you can have the application running on your computer or gaming equipment. The software is available for Linux, Windows and Mac devices. Once you install it, you are guided through the next few clicks, and then the remaining processes become automated.
During the active hours of the user's hardware, the Cudo software stays idle, so that the user can make optimal use of the hardware without extra applications slowing work down. When the hardware becomes idle, the Cudo software starts running automatically, generating income for the user.
Price Setting: With Cudo, renting out your unused hardware has never been easier. You also get to set your own price for your hardware. The Cudo engine matches you with users who need the capabilities of your device, whilst also suggesting prices for you depending on demand for such hardware. When there is a higher demand, the prices go up, benefitting the owner of the device.
Mining Cryptocurrencies: Cudo automatically switches your equipment to mining cryptocurrencies when there is no cloud computing work to be done, or when mining becomes more profitable than cloud computing services. You don't have to do anything for this switch to happen, and it will also take place when some coins offer greater mining rewards than others. The rewards from this mining are sent to your Cudo wallet, and you can withdraw them to an exchange and get fiat.
Donate to Charity: You can now donate your hardware to be used for mining, with the proceeds fully donated to charity. With Cudo, you don't have to worry about not having fiat to give to charity. Please visit www.cudodonate.com for more information on how this works.
Be a part of something great: Cudo already has over 100,000 users worldwide who use and trust its products. There is no other known platform that lets you alternate between mining and cloud computing services the way Cudo does. Going through www.cudoventures.com, you discover user-friendly tools that automate the process of reselling unused computing power. These are services that are unique to Cudo, and as a hardware owner, there is no reason why you shouldn't be part of this great platform.
*For developers and cloud service users, there are enormous benefits to be gained by using Cudo. You can choose your level of decentralization as a developer, and you can also list applications to be sold or used on the Cudo marketplace. As a cloud service user, you get a cost-effective service with an experienced team that is ready to personally respond to all your questions and concerns. You also get to join a wide range of companies that are now partners with Cudo and benefit from its cloud computing services, while choosing the level of privacy you desire for each project you execute through the Cudo platform.
You can join users from over 100 countries and become part of the Cudo family by visiting www.cudos.org | https://medium.com/@hazelia/the-advantages-of-using-cudo-for-a-hardware-owner-878bb04f532e | ['Hazel C'] | 2020-12-27 10:20:43.884000+00:00 | ['Blockchain', 'Blockchain Technology', 'Blockchain Startup', 'Cudos', 'Blockchain Application'] |
1,405 | Common Vue Problems — Port Number, this, Global Variables, and JSON | Photo by David Tostado on Unsplash
Vue.js makes developing front end apps easy. However, there are still chances that we’ll run into problems.
In this article, we’ll look at some common issues and see how to solve them.
How to Change the Port Number in a Vue CLI Project
We can change the port number that the Vue dev server listens on.
In a Vue CLI 2.x project, the config file for this is /config/index.js.
We can change the port property value there to change the port number.
To change it temporarily in a Vue CLI 3.x project, we can set the --port option.
For instance, we can write:
npm run serve -- --port 3000
Alternatively, we can change it in the .env file with the PORT option.
For example, we can write:
PORT=8888
Then the app will be served from port 8888 locally.
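Vue CLI 3+ also reads a vue.config.js file in the project root, which is the permanent equivalent of the command-line flag. A minimal sketch (the port value 3000 is just an example):

```javascript
// vue.config.js in the project root; the devServer options are
// passed through to webpack-dev-server.
const config = {
  devServer: {
    port: 3000 // the dev server now listens on 3000 instead of the default
  }
};

module.exports = config;
```

With this file in place, `npm run serve` picks up the port automatically, so every developer on the project gets the same setting.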
Access External JSON in a Vue App
To access JSON in our component code, we can import the JSON file directly.
For instance, we can write:
<script>
import json from './json/data.json'
export default{
data(){
return{
json
}
}
}
</script>
We just import the JSON data file into the code.
Then we can use it anywhere as we do with any other JavaScript object.
The imported JSON data will now be reactive like any other piece of data.
Refreshing a Token with a Refresh Token in Axios
We can refresh a token with a refresh token in Axios by using an interceptor.
The interceptor ejects itself while the token is being refreshed and is then re-created, so subsequent 401 responses are handled with the new token.
For instance, we can write:
createInterceptor() {
  const interceptor = axios.interceptors.response.use(
    response => response,
    error => {
      if (error.response.status !== 401) {
        return Promise.reject(error);
      }

      axios.interceptors.response.eject(interceptor);

      return axios.post('/api/refresh_token', {
        'refresh_token': this._getToken('refresh_token')
      }).then(response => {
        saveToken();
        error.response.config.headers['Authorization'] = response.data.access_token;
        return axios(error.response.config);
      }).catch(error => {
        destroyToken();
        this.router.push('/login');
        return Promise.reject(error);
      }).finally(createInterceptor);
    }
  );
}
If the error status is anything other than 401, we just reject the promise.
Then we remove the old interceptor with eject .
Next, we make a post request to get a new token with the refresh token.
Then we set the token as the value of the Authorization header.
We also save the token with saveToken .
If we failed to get the auth token with the refresh token, then we reject the promise with the error and redirect to the login route with Vue Router.
Finally, we call createInterceptor again in the finally callback to re-register the interceptor.
Apply Global Variables to a Vue App
We can apply global variables in a Vue app by using the Vue.mixin method.
For instance, we can write:
Vue.mixin({
data (){
return {
hello: 'world'
}
}
})
Then the hello property is available everywhere.
We can do the same thing with any other method or property.
Now we can use this.hello .
To make it read-only, we can make it a getter with get .
For instance, we can write:
Vue.mixin({
data (){
return {
get hello(){
return 'world';
}
}
}
})
Now we can’t set a new value for this.hello .
Also, we can add a new property to Vue.prototype .
For instance, we can write:
Vue.prototype.$appName = 'example app';
Then we create a new Vue instance or a new component, we can access it by writing this.$appName .
Using this in Vue Components
To use this in callbacks that are in Vue Components, we use arrow functions or we can set the value of this to a constant.
For example, we can write:
getData() {
const self = this;
axios.post('/api/users', ...)
.then(function(response){
self.users = response
})
}
We assigned this to self before we use it in the callback because the callback is a traditional function.
In a traditional function, this has its own value; in a callback it would not be the component (it is undefined in strict mode, or the global object otherwise).
self would reference the component since we assigned it to this outside.
To make our lives easier, we can use arrow functions:
getData() {
axios.post('/api/users', ...)
.then((response) => {
this.users = response
})
}
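The binding difference is easy to verify even without Vue or axios; in the snippet below, fetchUsers is a hypothetical stand-in for the HTTP call:

```javascript
// Hypothetical stand-in for an HTTP client that calls back with data.
function fetchUsers(callback) {
  callback(['amy', 'beth']);
}

const component = {
  users: [],
  getData() {
    fetchUsers((response) => {
      // Arrow function: `this` is still the component object here.
      this.users = response;
    });
  }
};

component.getData();
// component.users is now ['amy', 'beth']
```

If the arrow function were replaced with a traditional function expression, this.users would no longer point at the component and the assignment would be lost.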
Photo by Jamie Street on Unsplash
Conclusion
We can change the port number in a Vue CLI project by changing config files or using command-line flags.
To add global variables to a Vue app, we can use mixins or add a property to Vue.prototype .
To use the correct value of this in a callback, we can use arrow functions or assign this to a variable outside the callback.
We can import JSON files and use them like JavaScript objects. | https://medium.com/dataseries/common-vue-problems-port-number-this-global-variables-and-json-5961fc65e834 | ['John Au-Yeung'] | 2020-07-03 08:11:00.779000+00:00 | ['Technology', 'JavaScript', 'Software Development', 'Programming', 'Web Development'] |
1,406 | Ultimate Guide to Python's Matplotlib: A Library Used to Plot Charts | Ultimate Guide to Python's Matplotlib: A Library Used to Plot Charts
Photo by Cookie the Pom on Unsplash
Data visualization refers to the graphical or visual representation of data and information using elements like charts, graphs and maps. Over the years, data visualization has gained immense popularity, as it makes even massive amounts of data easy to interpret by displaying them as patterns, trends and so on. By following these patterns and trends, a user can support their decision making.
Python uses the Matplotlib library's pyplot module for data visualization. Pyplot is a collection of methods within the Matplotlib library that can be used to create 2D charts and graphs and represent data interactively and effectively. The Matplotlib library comes preinstalled with the Anaconda distribution, or it can easily be installed from the internet.
Installing Matplotlib
1. If you have Anaconda Navigator, open the Navigator window, click Environments and scroll down to find the Matplotlib library. It comes preinstalled on your computer.
2. If you don't have Anaconda Navigator, that isn't a problem. Just go to https://pypi.org/project/matplotlib/#files
Here you will find the library. Download and install it (or simply run pip install matplotlib), and you are ready to create wonderful charts and graphs in Python itself.
Types of charts offered by Matplotlib
It offers a wide range of charts of which the most prominently used ones are listed below:
1. Line Chart
It connects important points called 'markers' through the use of straight-line segments. These points will represent the data that you will enter while making the chart.
2. Bar Chart
It uses bars to represent the data. The height of the bars is variegated to depict the differences in the given data. Bar charts can be plotted horizontally as well as vertically depending upon the need of the user.
3. Pie Chart
Slices of a circular area are used to depict the data. The slice with larger area represents a higher value, whereas a smaller one is represented by less area.
4. Scatter plot
Scatter chart just plots the data in the form of dots. It differs from the line chart by not joining the dots using straight lines.
Now, let's move on to the steps to create these charts.
Note: You will have to give the command to import Matplotlib before you set out to create charts. For this, just type the below-mentioned command in your Jupyter or Python window:
import matplotlib.pyplot as pl
This will import Matplotlib, and you will just have to use 'pl' in place of the long 'matplotlib.pyplot' every time you create your charts.
Line Charts
To create a line chart you must assign some data beforehand. This data can be given in the form of lists, or dictionaries in python. Here I will use lists to create charts:
import matplotlib.pyplot as pl
a = [1, 2, 3, 4]
b = [2, 4, 6, 8]
pl.plot(a, b)
pl.show()
Here a and b are lists consisting of the values 1, 2, 3, 4 and 2, 4, 6, 8 respectively. The command pl.plot(a,b) plots a line chart using the values in a for the x-axis and b for the y-axis.
Here is the plotted chart:
Image source: Author
You can also give names to the x and y-axis as follows:
pl.xlabel("values in a")
pl.ylabel("values in b")
pl.plot(a, b)
pl.show()
Here the x-axis will be named 'values in a' and the y-axis 'values in b'.
Bar Charts
Let's move on to drawing bar charts. Bar charts require the same steps as the line chart; the only difference is in the command given to plot the data.
import matplotlib.pyplot as pl

a = [1, 2, 3, 4]
b = [2, 4, 6, 8]
pl.bar(a, b)
pl.show()
While giving the command to plot a bar chart we need to specify ‘bar’ for the same, as we have done above.
Image source: Author
To name the x and y-axis the same procedure can be followed.
pl.xlabel("values in a")
pl.ylabel("values in b")
pl.bar(a, b)
pl.show()
The width of the bars can also be altered using the ‘width’ command. The value given in the width command should be numeric, otherwise, Python will raise an error.
pl.bar(a,b, width=<value>)
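To make this concrete, here is a minimal runnable sketch; the width value 0.5 and the non-interactive Agg backend are illustrative assumptions, not from the original article:

```python
import matplotlib
matplotlib.use("Agg")  # assumed: render off-screen so no display is needed
import matplotlib.pyplot as pl

a = [1, 2, 3, 4]
b = [2, 4, 6, 8]
# width=0.5 is just an example value; passing a non-numeric value raises an error
bars = pl.bar(a, b, width=0.5)
pl.savefig("bar_width.png")
```

Each bar in the returned container now reports a width of 0.5 instead of matplotlib's default of 0.8.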
Scatter Charts
Scatter charts allow you to change the way their data points or markers look by specifying the marker type and the marker size. In this type of chart, you must specify at least the marker type while giving the command; the marker size is optional.
a = [1, 2, 3, 4]
b = [2, 4, 6, 8]
pl.plot(a, b, "o", markersize=10)
pl.show()
Here the data points will look like the letter 'o' and will have a marker size equal to 10. If we don't specify the marker, then instead of a scatter chart, a line chart will be plotted.
Image source: Author
Changing the x label and y label would remain the same for the scatter charts as well.
pl.xlabel("values in a")
pl.ylabel("values in b")
pl.plot(a, b, "o", markersize=10)
pl.show()
Pie charts
Contrary to other charts, a pie chart can function with just one list. But for clarity, for readers as well as users, we specify labels for each slice, which requires the use of a second list.
a = ['Sam', 'Tina', 'Joe', 'Mark']
b = [100, 200, 300, 400]
pl.pie(b, labels=a)
pl.show()
The lists represent the contributions made by 4 members to organize a party. The different members will be depicted by different colours as follows:
Image source: Author
You can also give a title to your pie chart: | https://medium.datadriveninvestor.com/ultimate-guide-to-pythons-matplotlib-a-library-used-to-plot-charts-3d2210ccb04c | ['Niyati Jain'] | 2020-12-14 18:04:54.883000+00:00 | ['Programming', 'Computer Science', 'Design', 'Digital Life', 'Technology'] |
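As a hedged sketch of that last point: a title can be set with pl.title before showing or saving the chart. The title string and the Agg backend below are my own illustrative choices:

```python
import matplotlib
matplotlib.use("Agg")  # assumed: off-screen rendering
import matplotlib.pyplot as pl

a = ['Sam', 'Tina', 'Joe', 'Mark']
b = [100, 200, 300, 400]
pl.pie(b, labels=a)
pl.title("Party Contributions")  # illustrative title text
pl.savefig("pie_title.png")
```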
1,407 | Let’s Learn About Graph Databases | From Punchcards to Relational Databases
In the early days, data was stored on punchcards and was really hard to read or interpret. It was impossible to index the data to cross-reference it and eliminate inconsistencies.
Punched card from Fortran program — Wikipedia
But soon, the industry evolved and RDBMS came into picture where data was stored in relational databases.
This format is somewhat human-readable, but to store a considerable amount of data, relational databases require normalization to remove duplication and inconsistencies.
They also require foreign key relationships to relate data, which makes the data hard to understand and maintain without complicated JOIN queries.
ER diagram of a Car Rental System
ACID: atomicity, consistency, isolation, and durability
Relational databases support ACID out of the box and that is a big advantage.
It means that once data is committed, it will be available for subsequent queries to use. But finding data is expensive, and the cost keeps creeping up as the size of the data grows.
To fix this issue, we add indexes to data which make lookup faster.
Adding indexes solves the issue to a certain extent, but if we have to do a number of JOINs, then we have to perform query-time index lookups for each and every JOIN. This approach starts falling apart and becomes expensive as the number of tables grows.
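To make the JOIN-and-index point concrete, here is a minimal sketch using SQLite in memory; the car-rental tables and column names are illustrative assumptions echoing the ER diagram above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE rentals   (id INTEGER PRIMARY KEY,
                            customer_id INTEGER REFERENCES customers(id),
                            car TEXT);
    -- an index on the foreign key turns each JOIN lookup into an index probe
    CREATE INDEX idx_rentals_customer ON rentals(customer_id);
    INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO rentals   VALUES (1, 1, 'Sedan'), (2, 1, 'SUV'), (3, 2, 'Hatchback');
""")

# the JOIN relates the two tables through the foreign key
rows = conn.execute("""
    SELECT c.name, r.car
    FROM customers c
    JOIN rentals r ON r.customer_id = c.id
    ORDER BY r.id
""").fetchall()
```

With only two tables this is cheap; the cost described above shows up as more tables, and therefore more per-row index lookups, join the query.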
1,408 | Vuex 4 — Modules Namespace. Vuex 4 is in beta and it’s subject to… | Photo by Daniel Jerez on Unsplash
Vuex 4 is in beta and it’s subject to change.
Vuex is a popular state management library for Vue.
Vuex 4 is the version that’s made to work with Vue 3.
In this article, we’ll look at how to use Vuex 4 with Vue 3.
Accessing Global Assets in Namespaced Modules
We can access global assets in namespaced modules.
For example, we can write:
<!DOCTYPE html>
<html lang="en">
<head>
<script src="https://unpkg.com/vue@next"></script>
<script src="https://unpkg.com/vuex@4.0.0-beta.4/dist/vuex.global.js"></script>
<title>App</title>
</head>
<body>
<div id="app">
<button @click="this['moduleA/increment']">increment</button>
<p>{{this['moduleA/doubleCount']}}</p>
</div>
<script>
const moduleA = {
state: () => ({
count: 0
}),
mutations: {
increment(state) {
state.count++;
}
},
actions: {
increment({ commit, dispatch }) {
commit("increment");
commit("someAction", null, { root: true });
}
},
getters: {
doubleCount(state) {
return state.count * 2;
}
}
};

const store = new Vuex.Store({
mutations: {
someAction(state) {
console.log("someAction");
}
},
modules: {
moduleA: {
namespaced: true,
...moduleA
}
}
});
const app = Vue.createApp({
methods: {
...Vuex.mapActions(["moduleA/increment"])
},
computed: {
...Vuex.mapGetters(["moduleA/doubleCount"])
}
});
app.use(store);
app.mount("#app");
</script>
</body>
</html>
We have the increment action that commits the someAction mutation from the root namespace.
Therefore, when we dispatch the moduleA/increment action, we should see 'someAction' logged.
We called commit with an object with the root: true property to make it commit the root mutation.
We can do the same with actions. For example, we can write:
<!DOCTYPE html>
<html lang="en">
<head>
<script src="https://unpkg.com/vue@next"></script>
<script src="https://unpkg.com/vuex@4.0.0-beta.4/dist/vuex.global.js"></script>
<title>App</title>
</head>
<body>
<div id="app">
<button @click="this['moduleA/increment']">increment</button>
<p>{{this['moduleA/doubleCount']}}</p>
</div>
<script>
const moduleA = {
state: () => ({
count: 0
}),
mutations: {
increment(state) {
state.count++;
}
},
actions: {
increment({ commit, dispatch }) {
commit("increment");
dispatch("someOtherAction", null, { root: true });
}
},
getters: {
doubleCount(state) {
return state.count * 2;
}
}
};

const store = new Vuex.Store({
mutations: {
someAction(state) {
console.log("someAction");
}
},
actions: {
someOtherAction({ commit }) {
commit("someAction");
}
},
modules: {
moduleA: {
namespaced: true,
...moduleA
}
}
});
const app = Vue.createApp({
methods: {
...Vuex.mapActions(["moduleA/increment"])
},
computed: {
...Vuex.mapGetters(["moduleA/doubleCount"])
}
});
app.use(store);
app.mount("#app");
</script>
</body>
</html>
We called dispatch with an object with the root: true property to make it dispatch the root action.
We can access root getters with the rootGetters property.
To do that, we write:
<!DOCTYPE html>
<html lang="en">
<head>
<script src="https://unpkg.com/vue@next"></script>
<script src="https://unpkg.com/vuex@4.0.0-beta.4/dist/vuex.global.js"></script>
<title>App</title>
</head>
<body>
<div id="app">
<button @click="this['moduleA/increment']">increment</button>
<p>{{this['moduleA/doubleCount']}}</p>
</div>
<script>
const moduleA = {
state: () => ({
count: 0
}),
mutations: {
increment(state) {
state.count++;
}
},
actions: {
increment({ commit, dispatch, rootGetters }) {
console.log("increment", rootGetters.one);
commit("increment");
}
},
getters: {
doubleCount(state, getters, rootState, rootGetters) {
console.log("doubleCount", rootGetters.one);
return state.count * 2;
}
}
};

const store = new Vuex.Store({
getters: {
one(state) {
return 1;
}
},
modules: {
moduleA: {
namespaced: true,
...moduleA
}
}
});
const app = Vue.createApp({
methods: {
...Vuex.mapActions(["moduleA/increment"])
},
computed: {
...Vuex.mapGetters(["moduleA/doubleCount"])
}
});
app.use(store);
app.mount("#app");
</script>
</body>
</html>
We have a one root getter method in the store’s root.
Then we have the rootGetters property in the object parameter of the increment method.
We can get the getter's return value from the rootGetters.one property.
Other getters can get the value from the rootGetters parameter.
Conclusion
We can namespace our store so that we divide our store into smaller chunks that have their own states. | https://medium.com/dev-genius/vuex-4-modules-namespace-e0b4f751119e | ['John Au-Yeung'] | 2020-12-26 21:41:07.366000+00:00 | ['Programming', 'Web Development', 'Technology', 'Software Development', 'JavaScript'] |
1,409 | Building Data Wrangling Zone using ksqlDB, kTable, kStream, and Kafka Connect | What is Data Wrangling?
Data Wrangling is the task of taking and standardizing disorganized or incomplete raw data so that it can be obtained, consolidated and analyzed easily. It also requires mapping from source to destination data fields. For example, data wrangling could target a sector, row, or column in a dataset and execute an action to generate the necessary performance, such as joining, parsing, cleaning, consolidating or filtering.
It helps improve data accessibility by converting data to make it consistent with the end system, since complex and intricate databases can obstruct data analysis and business processes. Data has to be transformed and structured according to the specifications of the target system to make it available for the end processes.
Data Wrangling Use-Cases
Real-time monitoring and real-time analytics
Online Data Integration
Materialized Cache
Streaming ETL pipeline
Event-Driven Microservices
Introducing ksqlDB as a platform component for handling Data Wrangling
ksqlDB is an event streaming database built for stream processing applications.
Events can be transformed in the form of tables (kTable) and streams (kStream).
On these tables/streams, SQL operations are applied to transform or aggregate information and push it into another Kafka topic.
KSQL operates on continuous queries: transformations that keep running as new data arrives in the data streams of Kafka topics.
For each topic partition processed by a given ksqlDB server, Kafka Streams generate one RocksDB state store instance for aggregates and joins. Each instance of the RocksDB state store has a 50 MB memory overhead for its cache plus the data actually stored.
To prevent I/O operations, Kafka Streams/RocksDB attempts to hold the working set of a state store in memory for aggregates and joins. This takes more memory if there are several keys.
Setup
Zookeeper
bin/zookeeper-server-start.sh config/zookeeper.properties >> /dev/null &
Kafka
bin/kafka-server-start.sh config/server.properties >> /dev/null &
ksqlDB Server:
ksql_server.list
KSQL_LISTENERS=http://0.0.0.0:8088
KSQL_BOOTSTRAP_SERVERS=KAFKA_BROKER_IP:9092
KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE=true
KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE=true
KSQL_KSQL_CONNECT_WORKER_CONFIG=/connect/connect.properties
KSQL_CONNECT_REST_ADVERTISED_HOST_NAME=PUBLIC_IP_KAFKA_INSTANCE
KSQL_CONNECT_GROUP_ID=ksql-connect-cluster
KSQL_CONNECT_BOOTSTRAP_SERVERS=KAFKA_BROKER_IP:9092
KSQL_CONNECT_KEY_CONVERTER=org.apache.kafka.connect.storage.StringConverter
KSQL_CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
KSQL_CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE=false
KSQL_CONNECT_CONFIG_STORAGE_TOPIC=ksql-connect-configs
KSQL_CONNECT_OFFSET_STORAGE_TOPIC=ksql-connect-offsets
KSQL_CONNECT_STATUS_STORAGE_TOPIC=ksql-connect-statuses
KSQL_CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=1
KSQL_CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=1
KSQL_CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=1
KSQL_CONNECT_PLUGIN_PATH=/usr/share/kafka/plugins
Server
docker run -it -p 8088:8088 --env-file ./ksql_server.list confluentinc/ksqldb-server:0.13.0
ksql CLI: This is a command-line utility that acts as an interface for the ksqlDB server and allows SQL operations to be executed interactively.
docker run -it confluentinc/ksqldb-cli:0.13.0 ksql http://KSQLDB_SERVER_IP:8088
What are kStream and kTable?
kStream
It is a structured but infinite series of events emitting out of a topic.
It is immutable and can be created by specifying the format of incoming events like DELIMITED (CSV), JSON, AVRO, etc.
Create Stream
create stream users_stream (name VARCHAR, countryCode VARCHAR) WITH (KAFKA_TOPIC='USERS', VALUE_FORMAT='JSON');
Select from Stream
select rowtime, * from users_stream emit changes;
kTable
The primary key is mandatory in kTable.
The events in kTable are updatable and can be deleted.
Create Table
create table countrytable (countrycode VARCHAR PRIMARY KEY, countryname VARCHAR) WITH (KAFKA_TOPIC='COUNTRY-CSV',VALUE_FORMAT='DELIMITED');
Select from Table
select * from countrytable where countrycode='GB' emit changes limit 1;
Type of Joins
Stream to Stream
CREATE STREAM s3 AS SELECT s1.c1, s2.c2 FROM s1 JOIN s2 WITHIN 5 MINUTES ON s1.c1 = s2.c1 EMIT CHANGES;
Stream to Table
CREATE STREAM s3 AS SELECT my_stream.c1, my_table.c2 FROM my_stream JOIN my_table ON my_stream.c1 = my_table.c1 EMIT CHANGES;
Table to Table
SELECT M.ID, M.TITLE, M.RELEASE_YEAR, L.ACTOR_NAME FROM MOVIES M INNER JOIN LEAD_ACTOR L ON M.TITLE = L.TITLE EMIT CHANGES LIMIT 3;
Supported Join Combinations
|Name |Type |INNER |LEFT OUTER|FULL OUTER |
|-------------|------------|---------|----------|-------------|
|Stream-Stream|Windowed |Supported|Supported |Supported |
|Table-Table |Non-windowed|Supported|Supported |Supported |
|Stream-Table |Non-windowed|Supported|Supported |Not Supported|
Add-on features
Embedded Kafka connect
ksqlDB's Connect management takes responsibility for reading from and writing to external data sources via Kafka topics.
This functionality is very helpful if you don’t want to write glue code to do it.
Download the preferred sink/source connector jar from Connectors.
Copy/Mount the connector jar in docker volume ‘/usr/share/kafka/plugins’, like ‘-v ./confluent-hub-components/debezium-debezium-connector-postgres:/usr/share/kafka/plugins/debezium-postgres’
Embedded connect has pre-installed plugins for Postgres.
CREATE SOURCE/SINK CONNECTOR `jdbc-connector` WITH(
"connector.class"='io.confluent.connect.jdbc.JdbcSourceConnector',
"connection.url"='jdbc:postgresql://localhost:5432/my.db',
"mode"='bulk',
"topic.prefix"='jdbc-',
"table.whitelist"='users',
"key"='username');
UDF (User Defined Functions) and UDAF (User Defined Aggregated Functions)
UDF
Extend KSQL using its programming interface to create scalar functions.
For an input parameter, return one output.
UDAF
For many input rows, return one output.
The state of the input rows is preserved and an aggregated output is returned.
UDF & UDAF are implemented as custom jars.
Jars copied to ‘ext’ directory of the KSQL server.
@UdfDescription(name = "SIMPLE_INTEREST", description = "Return simple interest calculated")
public class SimpleInterest {
    @Udf(description = "Given principal, rate of interest and time return simple interest")
    public double simple_interest(final double principal, final double rate, final int time) {
        // standard simple-interest formula: (principal * rate * time) / 100
        return (principal * rate * time) / 100;
    }
}
KSQL Vs SparkSQL
A quick comparison between SparkSQL and KSQL
Final Words
Why did I choose ksqlDB for data wrangling when there are other tools like Spark already present?
Spark has been a battle-tested tool for many years in the field of Data Streaming, especially working with Kafka. But there is a cost associated with it, which has been observed to be bear by every project.
As Spark is natively written in Scala, all its latest releases and bug fixes first appear in Scala, Scala being Spark's first-class citizen.
The learning curve of scala is considered high as compared to other supported languages.
Teams may opt for other supported languages like Python, Java, or R for Spark development. That not only brings cross-compilation delays in building and deploying Spark jobs, but teams may also experience delays in the release of new features and bug fixes.
Going cloud-native for setting up a Spark cluster over managed services like Glue or EMR (Elastic MapReduce) can be super expensive, as they only allow sequential execution of Spark jobs. Parallel execution expects a completely new cluster to be provisioned.
Dependency resolution is one of the other challenges, where teams spend a good amount of time. It gets even more difficult where the dependency management repository is outside the firewall periphery of the organization.
Developers need to focus not only on the core data streaming logic, but also on clean coding practices, unit tests, code coverage, CI/CD, etc.
KSQL (once deployed) only expects developers to understand SQL.
It exposes the RESTful interface to accept updates in streaming queries in SQL format.
Can be deployed on any container orchestration platform with horizontally scaled instances.
Natively supports Kafka, and provides embedded Kafka connect to ship data from/to multiple data sources.
Extensions to UDF (User Defined Functions), to implement complicated transformation logic.
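That RESTful interface can be driven from a few lines of Python. The /ksql endpoint and content type below follow ksqlDB's documented REST API, but the server URL and the statement are placeholders:

```python
import json
import urllib.request

def ksql_request(server_url, statement):
    """Build a POST request for ksqlDB's /ksql REST endpoint."""
    payload = json.dumps({"ksql": statement, "streamsProperties": {}}).encode()
    return urllib.request.Request(
        server_url.rstrip("/") + "/ksql",
        data=payload,
        headers={"Content-Type": "application/vnd.ksql.v1+json; charset=utf-8"},
    )

req = ksql_request("http://localhost:8088", "SHOW STREAMS;")
# urllib.request.urlopen(req) would send it to a running ksqlDB server
```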
Visit the link to get access to full scripts | https://medium.com/engineered-publicis-sapient/building-data-wrangling-zone-using-ksqldb-ktable-kstream-and-kafka-connect-646b6ef2371c | ['Rajat Nigam'] | 2021-02-19 11:20:29.402000+00:00 | ['Engineering', 'Technology', 'Cloud', 'Data', 'Kafka'] |
1,410 | А. Энхжин: Монгол компаниудын өрсөлдөх чадварыг нэмэгдүүлнэ | Nito is an impact startup by experienced entrepreneurs and IT experts introducing technological innovation for happiness and well-being at work | https://medium.com/@nitotech/%D0%B0-%D1%8D%D0%BD%D1%85%D0%B6%D0%B8%D0%BD-%D0%BC%D0%BE%D0%BD%D0%B3%D0%BE%D0%BB-%D0%BA%D0%BE%D0%BC%D0%BF%D0%B0%D0%BD%D0%B8%D1%83%D0%B4%D1%8B%D0%BD-%D3%A9%D1%80%D1%81%D3%A9%D0%BB%D0%B4%D3%A9%D1%85-%D1%87%D0%B0%D0%B4%D0%B2%D0%B0%D1%80%D1%8B%D0%B3-%D0%BD%D1%8D%D0%BC%D1%8D%D0%B3%D0%B4%D2%AF%D2%AF%D0%BB%D0%BD%D1%8D-d623c06d4f3 | [] | 2020-12-07 14:22:17.912000+00:00 | ['Technology', 'Innovation', 'HR', 'Startup', 'Impact'] |
1,411 | Metal Pens: Good for the Goose and Good for the Gander | Workers rip feathers out of live geese in an 1872 painting. The invention of metal pens displaced demand for quill pens.
By Paul Shapiro
If you were to argue a case before the Supreme Court today, you’d notice something a bit unusual. At your desk, the highest court in the land would have laid out before you pens should you need to write something during the proceedings. What’s unusual about these writing utensils isn’t just, however, that they’re a throwback to a pre-digital era. What’s unusual is that these pens are a throwback to a time when “pen” was largely synonymous with a bird’s feather.
Since its inception in the 18th century until today, the Supreme Court has always given quill pens to the attorneys arguing before it.
For millennia, in fact, quill pens were the norm in much of the world. Parts of the Dead Sea Scrolls (2nd century BCE) were written with quill pens. Both the Magna Carta and Declaration of Independence were written with quills. Thomas Jefferson was such a prolific writer he even bred geese at Monticello for the purpose of having a steady stream of quills with which he could write.
Quill pens weren’t just any feather, though. The only feathers suitable for writing are the birds’ stiff flight feathers, and each bird produces only about a dozen of such feathers per year, with the strongest feathers coming from living birds. As a result, once every 12 months, geese were forced upside down and had their flight feathers torn from their bodies. While this torment was an annual event, the geese had other feathers ripped from their bodies 3–5 times per year for down, with the agonizing experiences ending only after about a decade when they were finally slaughtered for food.
Because each bird produces so few flight feathers, and most people can only use the feathers from the left wing — right-handed people use feathers from the left wing, while the 10% of humans who are southpaws require right wing feathers — vast numbers of geese were needed to satiate humanity’s writing demands, especially as literacy began to climb. While there don’t appear to be good records on how many geese were used for all these pens, apparently “at one point St. Petersburg in Russia was sending 27 million quills a year to the UK.”
Metal Pens: The Golden (or at Least Steel) Goose
The animal welfare concerns associated with live-plucking of birds are obvious. Yes, there were plant-based reed pens available too, but they were considered far inferior by most, leading to the quill’s popularity — until a better alternative arose.
British entrepreneurs pioneered the use of metal pens as early as the 1820s, creating writing utensils that were stronger than quills, retained sharp edges much longer (quills needed sharpening weekly), and didn’t require constant dipping in an inkwell, allowing for faster writing with fewer interruptions of the writer’s thoughts.
In America, the first metal pen factory wasn’t established until 1870 in New Jersey, but it ushered in a revolution that quickly sent the quill pen the way of the carrier pigeon.
According to one historian:
“When the steel pen entered education, a revolution in school practice [occurred]. Writing with the quill had been a slow, unhurried art….[T]he writer had to stop frequently in order to reshape and sharpen the quill….The steel pen changed that. The steel pen made it possible to write continuously over long periods.”
Lessons for Today: Goosebumps for Animal Advocates
It’s seductive to think that humans, when we learn about an abusive industry that we support, will recoil from our ways and change our behavior. Sadly, our species rarely works that way.
As Harvard economist John Kenneth Galbraith put it: “Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everyone gets busy on the proof.” And that’s just about changing our minds, not even our actions, let alone actions that nearly everyone else is taking too.
Nearly every category of animal exploitation that’s been essentially ended in any particular society has seen its demise not because people made altruistic sacrifices for the purpose of ending cruelty.
It’s common knowledge that whales were largely freed from harpoons not because of sustainability concerns, but because a better alternative — kerosene — was invented. Horses were liberated not by humane sentiment, but by cars. We don’t exploit carrier pigeons any more not because people cared about pigeons, but because telecommunications rendered their use obsolete.
Usually if an exploitative industry has largely ended, it’s been displaced by a clearly superior animal-free alternative. The only exceptions I can think of to this are with veal and foie gras, which many people do avoid eating for animal welfare reasons, though very few people have ever eaten either, given how expensive they are. (These industries still exist, of course, but are and always have been relatively tiny.)
The point isn’t to suggest that animal advocates shouldn’t make ethical arguments and expose how poorly animals are often treated. Rather, the point is that moral awareness is virtually never sufficient to end an abusive industry. Humans need clearly superior alternatives to warrant switching away from animal use. Not the equivalent of reed pens, but something so much better, like metal pens, that using animals for that purpose would seem as abnormal as lighting a room with whale oil, riding a stagecoach to work, or sending our messages via pigeon.
In other words, yes, animals need humane sentiment, but they also desperately need inventors and entrepreneurs who can pioneer new products that will simply render their use as obsolete as a quill pen on a Supreme Court desk.
Paul Shapiro is the author of the national bestseller Clean Meat: How Growing Meat Without Animals Will Revolutionize Dinner and the World, the CEO of The Better Meat Co., a four-time TEDx speaker, and the host of the Business for Good Podcast. | https://paulshapiro.medium.com/metal-pens-good-for-the-goose-and-good-for-the-gander-6b563eaa7606 | ['Paul Shapiro'] | 2020-12-03 16:25:56.868000+00:00 | ['Animal Rights', 'History', 'Animal Welfare', 'History Of Technology', 'Social Change'] |
1,412 | A SINGLE SCREEN REPLACED BOOKS, PENS FROM YOUR HAND! | What a tragic reality we’ve got reached today! One screen isn’t solely restricted to the whole subjects on this planet however conjointly its corresponding books, and even lecturers. we could appreciate the dynamical technology and find ourselves familiar with that or we could simply conserve our authentic Gurukul System?
This blog throws light on the emerging and transforming learning methods in our day-to-day lives.
EARLY LEARNING METHODS
GURUKUL SYSTEM
Let’s begin with Gurukul System.
The students of Gurukul are more disciplined and organized. They are taught to follow a well- planned schedule in school. The students are more focused and possess more concentration power than normal students. This is because they are trained through techniques such as meditation which enhances their focusing power.
However, the old learning method was all about recitation, for example, students would sit in silence, while one student after another would take it in turns to recite the lesson until each one had been called upon.
But the best part about the traditional learning method is “DISCIPLINE” and “GRATITUDE TOWARDS TEACHERS.”
EVOLUTION OF EDUCATION AND LEARNING WAYS
In India, the education system has numerous aspects and has evolved since ancient times. As India progressed and gained its independence from the British, the modern education system gradually evolved. Presently, the Indian education system has four levels: pre-primary, primary, secondary, and higher secondary.
Photo by Clay Banks on Unsplash
This was the theoretical explanation of the evolution of Learning Methods but the practical evolution of learning methods changed the educational perspective of the world.
Traveling from the "forced to attend" corporate training programs, we have now reached a time of learning with flexibility. While revamping, accelerated learning techniques along with apt technology bring a refreshing aura to the learning segment.
Is this the bandwagon? Oh no, this is the basic nature of the environment — change.
It’s time to buckle up for the new-age corporate learning!
1.INDIVIDUAL TO COLLABORATIVE LEARNING: In collaborative learning, two or more people learn or attempt to learn something together. Courses serve best when they incorporate problem-based discussions, reflection, and other ways to make the participants an active part of the learning process. Collaboration is no longer considered a nice add-on; it has become a necessary feature. People are learning, not simply with others, but from the shared experiences and ideas of others.
2.PASSIVE TO ACTIVE LEARNING: Learners are reluctant to be mere content-receptors, just taking down notes or listening to trainers speak for hours without pause. Active Learning, aka Brain-Based Learning, is a participant-centred learning method, contrary to trainer-centred passive learning.
3.COMPULSORY TO CONDUCIVE LEARNING ENVIRONMENT: A favourable learning environment should be available not by chance but through careful design and planning. Several interventions like 'ICE BREAKERS', interactions, and psychological stimulations can be part of each training program. These interventions not only create a favourable learning environment but also make a participant understand the benefit of the training program. Participants must find a purpose in learning.
4.SINGLE MODE TO BLENDED LEARNING TECHNIQUES:
Studies have shown that using different types of learning techniques improves an individual’s engagement with the topic, retention of information, and overall satisfaction. Offering a variety of content, delivery methods, and further resources make the learning experience richer and ensure the best possible outcome.
Some of the basic examples are face-to-face training programs with e-learning modules, action learning sets, or discussion forums. Emerging new modes like e-learning, mobile learning, and gaming are adding flavors to blended learning techniques.
Fun in Learning
5.PROLONGED DURATION TRAINING TO BIT SIZE LEARNING :
Bearing in mind the short span of human attention, the bit-size learning approach is reshaping learning behavior patterns. The chunking method helps learners grasp information in short periods so that the learning can be applied successfully in the workplace. Such crisp and apt learning motivates learners to focus on the key concepts and techniques.
6.STANDARDISED COURSE TO BESPOKE PROGRAMMES: There is an earnest need to design each in-organization course suitable to the workforce, keeping one’s learning objective in view. On the contrary, the standardized courses have their limitations — many times the participants do not find relevance to the course content and their learning objectives.
7. MONOLOGUE TO INTERACTIVE LEARNING RESOURCES: Gone are the days when the print out of the slides was used as participant reference material. Each slide had a lot of information and was enough to create fear psychosis to the participant. In addition to the core face-to-face delivery, providing other resources like Participant Manuals, FAQ Guides, hints, tips, and fact sheets to support the learning may prove beneficial. Interactive Learning is a pedagogical approach that incorporates social networking and urban computing into course design and delivery. Interactive Learning has evolved out of the hyper-growth in the use of digital technology and virtual communication, particularly by students.
LEARNING IS CHANGING!
The manner in which traditional methods were taught ensured that students were rewarded for their efforts, that class periods were used efficiently, and that clear rules were exercised to manage students' behavior. They were based on established customs that had been used successfully in schools over many years. The teachers communicated the knowledge and enforced standards of behavior. Progressive educational practices, in contrast, focus more on the individual student's needs rather than assuming all students are at the same level of understanding. The modern way of teaching is more activity-based, using questioning, explaining, demonstration, and collaboration techniques.
Photo by NASA on Unsplash
“The traditional “chalk and talk” method of teaching that’s persisted for hundreds of years is now acquiring inferior results when compared with the more modern and revolutionary teaching methods that are available for use in schools today. Greater student interaction is encouraged, the boundaries of authority are being broken down, and a focus on enjoyment over grades is emphasised.”
says Sania Jackson.
Applying what you have learned is where 80 percent of the education takes place. This involves using the skills and knowledge within your work environment, which makes the learning stick, causing a behavior change that produces desired results. … Since learning is changing behavior, you may encounter resistance.
Augmented and virtual reality trends in education technology make learning a compelling experience. Whereas augmented reality provides an enhanced view of a real image, virtual reality gives a simulated perception of the reality around the learner. Both of these techniques have taken digital learning to new dimensions. One of the largest advantages of technology is the ability for students to learn at their own pace. Some students quickly grasp new ideas whereas others need longer to assimilate knowledge. Brighter students can move on to the next stage while others can use different learning strategies.
Technology can be used to improve teaching and learning and help our students achieve success. Indeed, technology is a “force multiplier” for the teacher. Rather than the teacher being the sole source of information in a classroom, students can access websites, online tutorials, and much more to help them.
Education technology can make learning more interactive and collaborative, and this can deepen students’ engagement with course material. Instead of memorizing facts, they learn by doing. For some students, interactivity provides a better learning experience.
EVEN COVID-19 IS REVOLUTIONIZING THE NEW-AGE LEARNING INDUSTRY.
Photo by Bruno Barreto on Unsplash
The coronavirus pandemic has caused India and other nations around the world to rush into remote learning. This abrupt shift will have a large impact on teaching and learning long after the COVID-19 crisis ends. Picker, a leading legal scholar at the University of Chicago Law School, says that the technology and infrastructure for remote learning have been building in the US and other nations over the last decade, making the massive push online possible. This huge shift is leading to experimentation on a global scale while underscoring a digital divide, based on income and location, that has long existed, says Picker. Remote learning is a powerful tool from grade school to professional education classes, and while Picker says it doesn’t replace the classroom, it shrinks distances and supports teaching in new and fascinating ways; for instance, inviting a guest speaker from Europe is a few clicks away instead of requiring travel.
Why the World is Betting on Online Education During the Pandemic
There are several advantages of turning to online education — even beyond the fact that the model is synonymous with social distancing. These include:
Flexibility: It’s much more convenient for students and teachers to embrace online learning. A stable internet connection and a computer are all that’s needed to turn your home into a classroom, and with today’s high-speed internet connections at affordable prices, this setup is easier than ever.
Accessibility: Before the lockdown, thousands of migrant students returned to their hometowns to stay home and stay safe with their families. In such a situation, online learning becomes a lifesaver. With e-learning, students can access educational opportunities that may not be available to them otherwise.
E-Learning amidst the Pandemic
Range of specializations: For students wanting to reinforce their professional skills through online learning, the web is a treasure trove of relevant courses, with thousands of hours of helpful content that can bolster their knowledge and skill base. There is no dearth of options available, and students can take hold of their career trajectory by selecting the one that suits them best.
Cost-effectiveness: Online learning is far more economical than traditional learning methods. Online degrees cost virtually a tenth of their offline counterparts. This is a crucial aspect in today’s economy, where job losses and pay cuts are rampant. Students can rest assured that the economic blows dealt by the pandemic won’t affect their education and, in turn, the future of their careers.
The traditional learning method, rote learning followed by a degree, is now virtually obsolete. Today’s students understand that active learning and relevant practical industry training are much more valuable in building a successful career. As a result, the desire to learn from industry professionals is booming. Students believe that such experiences will help them find employment more effectively than a classroom with archaic lessons.
Now that online education is the sole viable option, a revolution is on the horizon. Many students who embrace e-learning during these difficult circumstances may stick with it even once the pandemic has passed. Online learning is an intrinsic part of the new normal. It’s a unique method of learning that has been steadily gaining momentum for a while, and by the look of it, is here to stay.
CONCLUSION
Technology is an ever-changing platform. We can’t deny it, because we all benefit from it, and it has more or less become a necessity too. Traditional techniques used repetition and memorization of information to educate students, which meant that they were not developing their critical thinking, problem-solving, and decision-making skills. Modern learning encourages students to collaborate and therefore be more productive. That said, traditional and modern teaching methods are both effective and useful in today’s education. Sarah Wright, who blogs for TES, explains:
“As with most things, it’s all about balance. We need to understand when a traditional method works best and when it’s right to try new and innovative approaches.”
Digital Learning
The best way to conclude this discussion is that, alongside adapting and evolving ourselves with the newer learning methods, we will keep practicing the principles and disciplines of the traditional learning methods that remain beneficial.
WE ARE THE BRIDGE BETWEEN THE LEARNING METHODS!
We, at Ootsuk, ignite the sleeping curiosity of children from grade 1 to grade 12 and help them learn the art of questioning. The universe has all the answers you need; what you need to do is ask the right questions. And these right questions will get you your dream job. To know more, download the Ootsuk Android app (Ootsuk on Play Store) or visit the website www.ootsuk.com | https://medium.com/bemoreootsuk/have-you-ever-wondered-a-single-screen-would-replace-books-pens-from-your-hand-and-even-teachers-52204e59a324 | ['Satabdi Mohanty'] | 2020-07-24 17:09:04.267000+00:00 | ['Edtech', 'India', 'Education', 'Technology', 'K12 Education'] |
1,413 | Improving The World With Technology | The year is 2020. We are sending cars into space, talking to foreigners without knowing their language, sending information to over 100 locations per minute, and seeing pictures of cities that aren't even close to where we are. Ever since the first era of revolution in the 1800s, all the way to the fourth era, which is today, we have come up with brilliant inventions and brought humankind closer to the unimaginable than ever. It's time to utilize our capabilities and knowledge for something of a higher standard, but this time to take a look inside ourselves: to improve our own conditions and change the world for the better with the help of technology and the motivation inside our very own human mind.
“Be the change that you want to see in the world” — Mahatma Gandhi
What are the challenges of the world at the moment?
Our planet is a blessing. Astronauts and scientists describe the Earth as one strange rock, yet a beautiful miracle that has appeared in our solar system. At the same time, the controversial topic of climate change and gas emissions is one many find provoking. Many ask themselves whether it's the development and evolution of technology that has caused this today. We're talking excessive use of electronics, careless use of electrical energy, factories with massive gas emissions, or even just buying plastic. At the same time, we live in a world where social media controls our attention and thoughts, and drives our curiosity. We can't even vote for our favored political party without being impacted in one way or another by algorithms that roam the social media platforms. These are just a fragment of all the things we have great potential to improve as a humanity, but where do we start?
What I care for is a world where anyone is able to live to their best potential, with technology in hand to help. When it comes to plastic, gas emissions, and excessive use of energy and electronics, we've made great improvements, but we're just not there yet. There are also great advancements in the safety of pedestrians, with machine learning algorithms helping to prevent accidents. In addition to traffic, we also have artificial intelligence on our side to help the medical sector with issues such as detecting cancer cells early. I believe in a world where the human brain and technology can meet halfway, shake hands, and work to achieve great results, where we can turn data and information into a world of possibilities and positivity. I believe that by recognizing the challenges and obstacles, we can find countermeasures and attempt to improve the conditions of the people and the world, by anything small and by anything big.
The online and the offline behavior censor protocol
Now I'm sure this sounds quite weird, but let me explain. We also need to take care of our network as humans and to spread positivity around us. For example, when parents think of social media, many associate it with bullying.
We have already built a foundation of improvements when it comes to the current social challenges we face. We have an online and an offline behavior censor protocol, as I like to call it. The online behavior censor protocol makes sure that information that passes online is distributed in a way where hate and insecure acts are eliminated. I'm not talking about some sort of high-tech program; I'm simply talking about the process of making sure that those who try to bully or assault in any way get blocked, deleted, removed, or otherwise stopped. We use this as one of the only temporary countermeasures against online harassment.
The offline behavior censor protocol acts as a real-life process of stopping harassment and attacks on individuals, other objects, or topics. We have participated in stepping in, speaking up, and backing others up when it has been needed. We have been the brains behind the procedures of censoring unneeded comments, blocking others out of our lives, or helping others. My belief is that we need to mitigate the possibility of negativity existing in the first place. Not by fighting back: what I'm talking about takes place at much earlier stages, by teaching people early on to practice forgiveness and kindness, teaching about emotions, and many other countermeasures. Why are we fighting fire with fire when we can educate correctly?
Technology, intelligence, children and the future
Information Technology is no longer a stranger to us, and it isn't the kind of Terminator-movie scenario you saw in the 80s. Information Technology is our right hand in almost every subject, whether it is learning, helping, removing, printing, communicating, speaking, etc. You think it, you name it. For that reason, Information Technology is one of our biggest sources of help, one we utilize for our own good in daily life. In my opinion, people should be taught Information Technology so that they too can see this potential. The more you know about the possibilities of the future, the more you can use the world to your advantage. There are already many kids who do this: kids who learn Information Technology and earn their own money to support themselves, only with the help of knowledge and experience. They also live a simpler life, as technology helps them in many ways to increase the efficiency of everything they do, whether it is writing, doing work, planning a daily schedule, or securing their life in almost every way without having to monitor every task with cybersecurity.
I imagine a future where teenagers and kids can help themselves improve and get to know themselves better, learn about emotions, and learn about their own minds. A place where we can understand our own personal problems with the use of technology and deep neural networks that register our personal information, such as our intelligence and personality traits. This way we are our own change to the world, by understanding our own emotions and how to manage them. Perhaps it may sound very artificial, but this is not a strange phenomenon; people are already developing similar programs and processes.

However, there shouldn't only be a focus on mental improvement. I mentioned earlier enabling children and giving them the opportunity to live a life without having to depend on their "start" chapter or other obstacles they may come across that aren't their own fault. This means, for example, increasing help for those who have difficulty learning by giving them a custom-built plan and using the correct tools to strengthen weaker abilities, giving people a kick start wherever they want to go. I understand that this is what we're already doing, but by using technology and machine learning we can increase the efficiency and specificity of this help. We will have a better outcome while at the same time doing what is custom-fit and best for us. By giving people equity and enabling understanding between each other, I think we are already eliminating many real-life problems such as misunderstandings, mistreatment, and other situations that could escalate into something more major. People will understand that it's not just about helping others; it's about starting to understand yourself and improve your own journey before you start understanding and improving others.
| https://medium.com/@ergoetzy/improving-the-world-with-technology-5677c91b2d67 | [] | 2020-12-05 19:02:19.943000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Information Technology', 'Humanity', 'Future Technology'] |
1,414 | Learn About Headless Commerce & Microservices | Talon.One | One of the biggest changes that’s taken place in the world of business/ecommerce software is a move towards headless, microservices-based platforms — driven by consumers’ desire for personalized experiences and tailored content.
Monolithic systems are increasingly unable to provide the level of flexibility and service customers and businesses require.
Our new ebook takes a look at headless commerce & microservices, and why they’re becoming an increasingly popular choice for businesses in all industries.
Here’s a quick preview of some of the things you’ll learn in the ebook — Headless Commerce & Microservices Explained.
What is headless commerce?
The core principle behind headless commerce is headless software. You’ve probably heard the term, but many people are still unsure what headless software actually is.
Ultimately, headless software is backend software (software responsible for non-user-facing functions) that operates without a frontend (the software users interact with).
This is the opposite of traditional monolithic systems, where the frontend and backend are coupled together.
When backend and frontend software functions are decoupled, the platform becomes much more flexible, scalable, and versatile.
Benefits of headless commerce and microservices
There are many reasons why headless software and microservices have become the go-to choice for ecommerce:
Headless commerce allows businesses to create their own custom software stacks using many different microservices
Headless commerce provides much greater flexibility for new ecommerce channels
Headless commerce saves on developer time and reduces the risk of new changes disrupting the rest of the system
Ultimately, ‘best of breed’ microservices represent the next generation of ecommerce software platforms. Building upon the headless software model, best of breed microservices each fill their own niche within the wider ecommerce software stack.
Compared to monolithic software systems, microservices are especially suited to large enterprise-level businesses because they're modular, scalable, and packed with features.
This means that businesses get much more for their money, and they don’t have to worry about upgrading or changing their system as their demands grow.
Ultimately, because microservices each serve their own specific niche, they tend to provide greater functionality than monolithic software systems can.
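The decoupling described above can be illustrated with a toy sketch. Everything below is hypothetical — the service names, data, and composition layer are illustrative stand-ins, not any vendor's real API — but it shows the core idea: each "microservice" does one narrow job, and any frontend (web shop, mobile app, kiosk) composes the same backend calls.

```python
# Toy sketch of a headless, microservices-style storefront.
# Each "service" below is a stand-in for an independent microservice;
# the names and data are hypothetical, not any vendor's real API.

def catalog_service(sku):
    """Pretend product-catalog microservice."""
    products = {"TSHIRT-01": {"name": "T-Shirt", "price_cents": 1999}}
    return products[sku]

def promotion_service(price_cents, coupon=None):
    """Pretend promotions microservice: applies a flat 10% coupon."""
    if coupon == "SAVE10":
        return round(price_cents * 0.9)
    return price_cents

def checkout(sku, coupon=None):
    """Backend composition layer with no frontend assumptions baked in.

    A web shop, mobile app, or kiosk could all call this same function
    (in practice, the same HTTP API) and render the result their own way.
    """
    product = catalog_service(sku)
    total = promotion_service(product["price_cents"], coupon)
    return {"item": product["name"], "total_cents": total}

print(checkout("TSHIRT-01", coupon="SAVE10"))  # {'item': 'T-Shirt', 'total_cents': 1799}
```

Swapping the promotions logic for a different provider only touches `promotion_service`; the frontends never change, which is the practical meaning of "decoupled."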
You can find out more about headless commerce and microservices, including tips on how to choose the right ones for your business, in our ebook. | https://medium.com/@talonone/learn-about-headless-commerce-microservices-talon-one-aebd8e3bf0a5 | [] | 2021-06-17 08:29:55.122000+00:00 | ['Microservices', 'Headless Commerce', 'Retail Technology', 'Promotions'] |
1,415 | Classic Facebook is going away soon | (Image: Techtsp.com)
According to Techtsp, Facebook is offering a new prompt saying that the classic Facebook design is going away next month, encouraging users to switch to the ‘new Facebook.com.’ In May, Facebook made the new optional desktop design available for everyone. And now, Facebook intends to make it the default desktop design. As a result, users have no choice but to switch to a new desktop design sometime next month.
Also Read | Google Chrome making it easier to edit offline files and documents
“We’ve made improvements to the new Facebook.com and we’re excited for everyone to experience the new look. Before we make the classic Facebook unavailable in September, we hope that you’ll let us know how we can continue to make Facebook better for everyone.”
Also Read | Google Chrome 86 finally brings Native Windows 10 sharing behind flag
The new Facebook design supports dark mode and promises faster and more streamlined navigation to help users discover videos, games, and Groups more easily. | https://medium.com/@techtsp/classic-facebook-is-going-away-soon-b9363feea5d8 | ['Tanmay Patange'] | 2020-08-20 08:05:05.913000+00:00 | ['Technology', 'Technology News', 'Social Media', 'Tech', 'Facebook'] |
1,416 | WE AND SILICON CHIPS:- | Integrated circuits.
It is well known to everyone that the present and the coming times belong to technology: artificial intelligence, the internet of things, data science, and so on. Today, any information from around the world is at the tips of our fingers; just a click and you are served. It is also well known that many factors comprise the zenith attained by mankind in this sphere; the one today's article is about is the CHIP, OFTEN KNOWN AS THE SILICON CHIP: silicon, the 14th element in the periodic table, the most abundant and most preferred semiconductor.
The most astonishing thing about chips nowadays is their size, or to be more precise, "the shrinking size".
Technology, full of wondrous antiquity, laid its foundation stone in the year 1946 with the invention of ENIAC, the world's first programmable electronic computer, developed by John Mauchly and John Presper Eckert. It weighed about 27 tons, occupied 1800 sq ft (roughly a 3BHK apartment), and required an electrical supply of 150 kW (300 mixer grinders running at a time).
And today's electronic computers are light (e.g., the iPhone 7 weighs 138 g), need at most one hand's worth of space, and draw just 2–6 W at a time depending on the technical infrastructure. This enormous versatility and power efficiency have largely been attained because of the shrinking size of chips over these five decades, from 10 microns to 0.005 microns. It's obvious that smartphones would not have been handy, or even efficient, with a 10-micrometre chip. Each consecutive generation of computer chips ensures more versatility, efficiency, and reliability.
HERE’S A QUICK LOOK :-
1971 — Intel 4004 — 10 micrometers
1993 — Intel Pentium — 0.8 micrometers
2007 — Intel Core 2 Duo — 0.065 micrometers
2013 — Snapdragon 800 — 0.028 micrometers
2017 — Apple A10X — 0.01 micrometers
2018 — Apple A12X — 0.007 micrometers
2020 — Apple A14 — 0.005 micrometers
2023* — TSMC — 0.003 micrometers
2029* — 0.0014 micrometers
Apple's move from the 16 nm A10 to the 10 nm A11 reportedly yielded a 25% performance increase, and lately, BBC reports say 5 nm chips are expected to be 15% faster than 7 nm chips while using the same power, even when working at higher speeds. All this is possible because of the millions, and now billions, of transistors embedded in them. And this advancement will go on and on.
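Why shrinking matters can be shown with simple arithmetic. In an idealized model, transistor density scales with the inverse square of the feature size; real process nodes do not scale this cleanly, and modern "nm" figures are partly marketing labels, so treat the sketch below as a rough illustration only.

```python
# Idealized scaling: transistor density ~ 1 / (feature size)^2.
# Real process nodes do not shrink this cleanly, and modern "nm" names
# are partly marketing labels, so this is only a back-of-the-envelope model.

def density_gain(old_nm, new_nm):
    """How many times more transistors fit per unit area after a shrink."""
    return (old_nm / new_nm) ** 2

# Intel 4004 node (10 um = 10000 nm) vs Apple A14 node (5 nm):
print(density_gain(10_000, 5))       # 4000000.0 -> four million times denser
# A single 7 nm -> 5 nm step:
print(round(density_gain(7, 5), 2))  # 1.96 -> nearly double, in the ideal model
```

That inverse-square relationship is why each node shrink in the list above buys so much more capability than the raw numbers suggest.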
And the latest: Apple's M1 silicon chip, built on a 5 nm process with 16 billion transistors embedded. | https://medium.com/@abha975675/we-and-silicon-chips-e49f251a6daa | ['Abha Negi'] | 2020-11-26 14:19:12.722000+00:00 | ['AI', 'Chips', 'Silicon Valley', 'Technology'] |
1,417 | How To Justify The Valuation of Tesla Stock? | Photo by Austin Ramsey on Unsplash
That's it. Tesla is now part of the S&P 500 index, achieving the status of a safe bet and representing one of the top weightings of this prestigious index, whose members include Apple and Amazon.
Many financial analysts have long been skeptical of Tesla's valuation. Elon Musk himself tweeted in May that the stock price was too high.
But it looks like the pandemic has not had an effect on Tesla, which posted a very convincing third quarter. The stock price is only going up. So what's happening? What can we find inside the leader in electric cars?
Market Value
What is difficult today is to differentiate the quality of the company from the stock price. We must therefore analyze Tesla as a stock market valuation and not as a company. Tesla has a technological lead in electric cars; we can all agree on that.
Tesla’s turnaround in fortunes this year came after the company managed to report profits for five consecutive quarters.
But for a market valuation, the central question is whether Tesla is really worth that much. As of December 2020, Tesla has a market cap of $658 billion, making it the world's 9th most valuable company by market cap, ahead of General Motors, BMW, Volkswagen, and Chrysler. Can Tesla reach a market cap of $1 trillion?
As Tesla’s loyal investors reap the rewards of their faith in Musk’s vision, the big question for 2021 is how to trade this stock in the future. The analyst community is divided. Is Tesla a bubble or the new El Dorado?
A major challenge for investors and analysts is how to justify further gains for a stock that is already trading at very high valuations. For this to be justified, strong growth alone is not enough. The growth is already there, but it is not enough; it takes more than that. In fact, the market is already betting that Tesla will become number one in the world, as it did with Amazon in the past. This helps explain what is called a high valuation on the stock market.
What motivates buyers to buy at such high prices? What are they basing it on?
For instance, Piper Sandler recently raised Tesla's price target to $2,300. Market studies note a competitive advantage, a strong balance sheet, and, finally, the ability of Elon Musk's company to generate cash.
Tesla's cash position is improving, but of these three points, what I am really interested in analyzing is one aspect of the competitive advantage. How is Tesla different from the others? How can Tesla stay so far ahead that it is never caught?
Of course, we could focus on all the material innovations, such as batteries, to explain Tesla's lead over others. But technological innovation at Tesla is quite different: it could well be software.
Tesla’s innovation strategy, which focuses on transforming the automotive industry as a whole, demonstrates, in particular, the validation of hypotheses and the implementation of new technologies in the market.
Tesla’s hardware architecture is an assembly of batteries at the base, two electric motors, no transmission… All of this also gives it an advantage over competing electric vehicles built on traditional vehicle architectures.
What makes this part of the strategy truly unique is not only the fact that Tesla produces electric vehicles, but also that it introduced a vision for assembling them: a new hardware and software architecture. A Tesla has more software than the average vehicle, integrated around a single central software architecture. Tesla has a great ability to update its software and optimize vehicle performance thanks to this very distinctive architecture, which most traditional cars, even those that also have software, find hard to emulate.
Full Self-Driving (FSD)
Autonomous driving is Tesla’s biggest opportunity. Much of Tesla’s earnings call was devoted to discussing the company’s efforts to achieve full self-driving capability (FSD). Let’s dig a little deeper into this technology.
Tesla's vehicle software, which currently enables the FSD capability, has improved over time. Its release followed a "fundamental rewrite" of the FSD ("full self-driving") software, which now combines imagery gathered from all eight cameras of a Tesla vehicle into a series of linked images, as opposed to processing image after image.
New steps for the autonomous car? On October 21, Tesla released the beta version of its FSD update.
In August, Musk promised a "quantum leap" in the capabilities of this $8,000 option, explaining that the software had not simply been improved but completely rewritten by his teams. Saying that he himself used the alpha version of FSD mode between his home and work, he promised in particular that cars running it were able to perceive their environment in "four dimensions" and not just two.
This improvement of the FSD allows cars to drive alone on small roads or in town, and no longer only on freeways or expressways. Only a few customers have been chosen by the brand to test this advanced mode, which is not yet available to the general public. Drivers should keep their hands on the wheel and be extra vigilant because the vehicle can do the wrong thing at the wrong time.
Thanks to many kilometers of data gathered by enthusiastic beta testers and feedback to Tesla, FSD appears to be getting rapidly better.
Tesla Autopilot FSD Oracle Park to Crissy Field at Night 1x Raw Footage — Whole Mars Catalog
When activated, the digital instrument cluster takes on new graphics: vehicles and obstacles are framed, along with traffic signs. This visually represents the observations of the software.
The autopilot was previously reserved for highways; another system was used for parking assistance and low-speed maneuvers. Until now, urban speeds were a problem for the engineers. Testing in the real world is key to Tesla's strategy of exposing the neural net and algorithms that underpin the self-driving software to "edge cases" that would not otherwise occur in a simulated environment.
The YouTuber Dr. Know-it-all Knows it all shared his thoughts on FSD and describes a system that identifies objects through an image-based neural network and then matches each object against its image database. Semantically, recognizing a cat does not mean that the computer understands what a cat is. It is simply the assembly of edges, pixels, and colors associated with a label which it knows is called "cat".
Trajectory prediction and 4D data continuity — Dr. Know-it-all Knows it all
This affects driving because the computer only detects what a picture contains in relation to a tag; it doesn't give the computer any more information about what a cat actually is, and no idea of how a cat behaves.
In order to drive effectively, the computer must know the difference between the images and what they represent as well as being able to anticipate what will be in the next image. This is what Dr. Know-it-all called “continuity”. If the computer detects what is in one image, the computer should be able to tell that it is in the next image, in four dimensions over time. This is why it is the “4D”, he noted. Time, or continuity, is the 4th dimension. A dimension where the algorithm would be able to predict variations in speed or even changes in direction of objects near the vehicle.
We need to move forward in this direction and improve the computer's semantics in order to avoid very rudimentary driving errors and achieve the sci-fi autonomous driving that we expect, with the car driving itself. For this to happen, we need data continuity over a long sequence of images or videos and a temporal, semantic understanding of what the car sees. This is where 4D training comes in. He noted that Tesla's FSD looks at eight individual cameras as well as radar and sonar roughly every 30 milliseconds, which is very fast processing. The FSD treated all of this information separately: it identified the objects and then acted on them.
Tesla’s Competitive Advantage
The new beta of FSD with 4D capabilities uses the Tesla inference engine and the chips Tesla has in its vehicles. The eight cameras are now linked together in a single view, and it’s a video sequence instead of individual images. Along with this, there is object detection which has more information.
Besides the likelihood of fewer errors in semantic identification over time, what this 4D training opens the door to is trajectory projection. This requires the computer to have a large amount of knowledge, because it has to understand what these objects are. The computer now knows that it must take immediate action to avoid an object that can move on its own or, at the very least, that it must brake quickly to avoid a collision with the object, or use different calculations to avoid hitting it.
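The trajectory-projection idea can be sketched numerically. The code below is not Tesla's algorithm — their planner is far more complex and learned from data — but a minimal constant-velocity projection: given an object's position in two consecutive frames, estimate its velocity and predict where it will be a few frames ahead, which is the basic ingredient of trajectory prediction and collision checks.

```python
# Minimal constant-velocity trajectory projection.
# NOT Tesla's actual planner -- just the basic "4th dimension" idea:
# track an object across frames, estimate its velocity, project it forward.

def project(p_prev, p_now, dt, horizon):
    """Predict future positions from two consecutive observations.

    p_prev, p_now: (x, y) positions at times t-dt and t (e.g. in meters)
    dt:            time between frames (seconds)
    horizon:       how many future frames to predict
    """
    vx = (p_now[0] - p_prev[0]) / dt
    vy = (p_now[1] - p_prev[1]) / dt
    return [(p_now[0] + vx * dt * k, p_now[1] + vy * dt * k)
            for k in range(1, horizon + 1)]

# A pedestrian seen at (0, 4) and then at (0.5, 4) one frame (0.1 s) later
# is moving at 5 m/s in x; three frames ahead it should be near (2.0, 4).
future = project((0.0, 4.0), (0.5, 4.0), dt=0.1, horizon=3)
print(future)  # [(1.0, 4.0), (1.5, 4.0), (2.0, 4.0)]
```

A planner can then check whether any projected position intersects the car's own projected path and brake early; the whole point of frame-to-frame continuity is that velocity, and hence the future, becomes computable at all.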
This type of trajectory projection is what humans do extremely well, at least when we are paying attention while driving. Existing systems like Waymo and GM Cruise precisely map the environments around them and can track these things. They can sense objects in the way and brake if they need to, but they lack the understanding of Tesla's new FSD software.
Not only are there a variety of objects in the world that can get in the way of the car, but the computer must also determine if these objects are moving, standing still, how dangerous they are, and then be able to respond. Along with this, the computer must also determine if things are far enough away.
The software security of such systems seems far from certain. A hacker known as Green also discovered the actual system settings of the beta version of the brand's full self-driving feature. The system has dozens of parameters and settings, but what stands out most is that it is possible to display a detailed view of the world around the vehicle while a Tesla is in motion. One can see the internal states of the system, which show dozens of parameters, including controls for FSD, camera information, and ultrasound. Another hacker unlocked the "augmented vision" mode, used to understand what the car is seeing and build confidence in the system.
There was a time when manufacturing seemed to be Tesla's weakness. Manufacturing issues affected the initial production ramp-ups of the Model S, X, and 3. But Tesla's Model 3 launch at its new factory in China and the company's Model Y launch at its California plant not only ran smoothly but also ramped at faster rates than previous launches.
The promise of the new FSD is attractive, and the timing of the competition imposes urgency. Tesla has claimed the possibility of 30% gross margins if more customers choose to purchase the company's FSD software.
Dominating the industry, given its size, structure, and politics, is a challenge for Tesla. However, the multiple challenges facing the industry's economic model give Tesla a competitive advantage.
Tesla investors must still be patient. Many criticisms are indeed directed at the manufacturer because of its communication regarding the automation of driving.
Tesla is valued as a technological innovator. Tesla's current valuation does not imply that it is only an auto company. Tesla buyers and investors are betting that, like Apple, the company will soon have a series of services built into its hardware that will make Tesla more than just a carmaker.
Summary | https://medium.com/@daniel-leivas/how-to-justify-the-valuation-of-tesla-stock-50be668afc13 | ['Daniel Leivas'] | 2020-12-22 22:13:42.632000+00:00 | ['Self Driving Cars', 'Technology', 'Tesla', 'Electric Car', 'Stock Market'] |
1,418 | Big Data Analytics: I.R. Smart Nation and I.R. National Collective Brain (NCB) IoT integrations | Big Data Analytics: I.R. Smart Nation and I.R. National Collective Brain (NCB) IoT integrations
I.R. Smart Nation is a proposed National Defense Data Security and national informatics and information surveillance program, and it includes the entire project of securing public and private users' devices. The particular subject here is the I.R. National Collective Brain (NCB), presented as the most sophisticated computational-neuroscience political analytics and the most advanced IoT integration for the smart nation and the Defense Data Security initiative, intended to raise the national standard of users' mind quality using Score Feedback and Equality Experience. Dr Daric Erminote, Dec 7, 2020
The Ice Regime of North Pole Smart Nation is one of the most important initiatives to protect the dictatorial statement of the political and military regime and the national security framework of the country project, and it is an extra-luxury warranty offering the country's users computer-science quality standards to speed up business productivity and an extra-luxury lifestyle, including personal life experience.
What is National Collective Brain (NCB)
The National Collective Brain (NCB) is a data intelligence project to be managed by the Dictator and the Secretariat of the Dictatorship, which is run by the Secretary of State, Minister Executives, and over 37,050 Secret Officers, to guarantee the national surveillance projects, the Defense Data Security, the entire ministerial data analytics and information security, and the executive council for business and residential users' device data security and communications systems and services.
The most interesting National Collective Brain (NCB) feature is its IoT integration with the digital identity protocol of each user registered in the nation. Many information and marketing projects must be created to give users possibilities and recommendations for arranging the proper environment to join work experiences and recreational activities, guaranteeing an excellent, personalized lifestyle and avoiding agony, nostalgia, or terrible experiences when users approach people and environments where their behavior is not appropriate. The main interest is to offer free education through a leveled dashboard for electronic study in various fields, integration with a prepaid job-seeking dashboard (where reviews are strictly necessary to build a trust score on work-experience information), and a personal cloud feedback tool that users can use to review their interactions, including some restricted social media integrations and professional dashboard features. The console also includes medical data, sports club data, financial information, and records of legal and criminal conduct.
Why Ice Regime of North Pole is more efficient than a regular first World Democratic Nation
The Ice Regime of North Pole is a financialism movement that does not accept diversity freedoms or the nature of ignorance, a national initiative that is always powering new technologies and processes for the lifestyles of human society. Accelerating healthcare, mentality, education, and professionalism is a must to guarantee an excellent opportunity to feel successful in life and career, and to have an excellent human society without disturbances or miserable living environments that slow down projects or endanger lives, goods, properties, or personal data, and without suffering from terrible diversification and financial terrorism.
In Conclusion
Food is essential to life, and the geographical North Pole has limited fish and seafood. It matters that people eat healthy food. The project agrees that a healthy lifestyle is important for living a great life, because health is essential to doing anything: users who are not healthy cannot do good things.
Because financial, medical, and sports healthcare information development requires it, accelerating healthy lifestyle habits can help reduce the risk of heart attack, especially in submarine cities. The main purpose of convenience foods is to save us time and work in international markets and trade. Particular decisions must be taken in the smart nation applications and the National Collective Brain about whether to be satisfied with the questionable quality and safety of these sectors, and much more. | https://medium.com/@daric-erminote/big-data-analytics-i-r-smart-nation-and-i-r-national-collective-brain-ncb-iot-integrations-828e5c4f9cc6 | ['Dr Daric Erminote'] | 2020-12-07 21:13:25.756000+00:00 | ['Big Data Analytics', 'Politics', 'Information Technology', 'Big Data', 'Military'] |
1,419 | This Week in Good Reads | Hey Absurdists, here are some stories that caught my interest. First, a quick factoid on our namesake Absurdism: it isn't actually about extreme comedy but is rooted in a philosophical movement.
“Be a complete light to yourself. I realize that so I don’t follow anybody or any worship, any ritual & yet the eternal eludes me.” — Jiddu Krishnamurti
Put another way, we often ask, “why are some opinions or solutions more valid than others?” At Absurdist, we’re shaping our inquiries around four areas: Culture, Policy, Nature & Technology, and we hope you find it enlightening. | https://medium.com/endless/this-week-in-good-reads-3aa3d16cc34e | [] | 2015-07-14 19:21:50.028000+00:00 | ['Social Justice', 'Feminism', 'Technology'] |
1,420 | Role Of QA In Banking Digital Transformation | Advancing digital technologies are changing the way industries function; industries are getting the boost they need from digital technologies. To meet the expectations of digitally equipped customers, every industry is relying on Quality Assurance (QA). The same goes for the banking sector: consumers want the best service, and digital transformation in banking is picking up the pace. Banking digital transformation will result in increased data transparency, the removal of intermediaries in the process, and fast and secure methods to access financial and intellectual data. Banks will have the benefit of lower overall operating costs and faster transactions.
Banks possess a lot of private data that can be used to understand the needs of users of different backgrounds.
Banking software testing can leverage the power of blockchain and AI (Artificial Intelligence) and helps banks to survive tough competition with other financial as well as non-financial platforms. Digital transformation in banking can automate numerous manual operations, which in turn enhances customer satisfaction. With the combination of Artificial Intelligence (AI) and the Internet of Things (IoT), banks can collect and analyze personal data of the customers and create more personalized offers for clients.
In today’s time, bank applications have become more complex and interconnected to provide flexibility, transparency, and speed. Quality assurance helps to bring the best out of this banking digital transformation with the help of reliable testing tools, while achieving high cost savings through fully managed or automated testing services.
Model-Based Testing Workbench
This methodology is designed to speed up the creation of test scripts. In traditional methods, testers manually create each test script and use cases. The task requires great testing skills and domain knowledge. With the usage of model-based testing tools, the banking sector can automatically generate test cases from models to describe the application. A single model can generate multiple test cases in a short time.
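As a rough illustration of the idea (the screen, field names, and states below are invented for this sketch, not taken from any real banking tool), a "model" can be as simple as the possible states of each input, from which every combination becomes a generated test case:

```python
from itertools import product

# Toy model of a funds-transfer screen: each field and its possible states.
model = {
    "account_type": ["savings", "checking"],
    "amount": ["zero", "within_limit", "over_limit"],
    "currency": ["domestic", "foreign"],
}

# One generated test case per combination of field states.
test_cases = [dict(zip(model, combo)) for combo in product(*model.values())]
print(len(test_cases))  # 2 * 3 * 2 = 12 cases from a single small model
```

Twelve cases fall out of three short lists; updating the model regenerates the whole suite, which is why a single model can replace many hand-written scripts.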
Driving Quality and Performance with Robust Testing
Quality assurance (QA) initiatives, like choosing the right kind of banking software testing tools, help in delivering quality products that effectively meet customer needs and preferences. With optimized QA processes and the right test management programs, banks can reduce post-production defects. Quality assurance helps increase the reliability and security of core systems, and costs can be reduced with the help of automation. In banking, an end-to-end testing methodology is required. It should include omni-channel testing, continuous testing, cybersecurity testing, customer experience testing, and banking app testing, along with stress and load testing. QA ensures complete coverage of all business requirements as well as the functional aspects of the application.
Overcoming challenges in testing
For Banking in the digital age, Quality Assurance is indispensable for the smooth and hassle-free running of financial applications. But there are several challenges. Let us look at a few of them:
Complexity
Banking apps are complex; there are legacy systems. They require constant updates to stay competitive and meet regulatory requirements. For a typical web or mobile app, there are 200 test scripts that need to be created and run, so the whole process is complicated.
Then there is added pressure of reducing the time to market. The end result is there is less time for testing, which increases the risks to quality.
Heterogeneous testing environment: QA teams test on a huge number of devices, operating systems (OS) or browser combinations such as Windows, Safari on an iPhone7, etc. To test these combinations, you require a large number of testers in manual testing. Also, as ‘wearable banking’ is around the corner, testers will need to test banking apps on Apple Watch and Google Glass to serve digitally-savvy consumers.
Regression Testing and Data Security
In quality assurance digital transformation, it is challenging to perform cost-effective regression testing over an application’s lifecycle. It is necessary to ensure that testing takes into account system integration and test data usage is as per data confidentiality norms. To tackle these challenges, an effective test suite is mandatory, along with effective and robust test data management.
One of the best ways to deliver high-performance financial applications is to embrace test automation. Automated tests are great for repetitive runs; they provide exceptional accuracy and speed and help identify errors in the early phases of the development cycle. Developing a long-term testing framework and roadmap helps in the proper management of resources. The following are key challenges that internet banking software testing companies face:
Concept of Omni-Channel Banking — In financial markets, the omnichannel or branchless banking concept is gaining traction. It is challenging for QA to ensure end-to-end functionality and highly effective mobile applications.
Compliance with Security Standards — Banking institutions are required to comply with FATCA (Foreign Account Tax Compliance Act), AML (Anti-Money Laundering), and PCI DSS (Payment Card Industry Data Security Standard). These ensure that there are no frauds. In banking, the QA team needs to take security compliance seriously.
Failed Transactions in Banking Portals — When there is a problem in banking portal transactions, the entire system can collapse. The application needs to undergo repeated load tests, as multiple transactions happen at any given point in banking apps. The reliable performance of a banking app is a crucial factor.
Traceability — Testers should have an Application Lifecycle Management (ALM) solution to bridge the gap between requirements, test controls, tasks, and releases.
Solutions For Banking Applications
To solve these issues, the following solutions must be used:
Well-defined end-to-end testing strategy
Visual planning boards, real-time dashboards, etc. to ensure proper planning and execution.
UI and UX testing for multiple users
Application testing in terms of performance, functionality, and security
Overall performance testing to cater all workflows
Agile and speedy software solutions to stay ahead in the competition in financial markets.
Final Words
Banking in the digital age is all about customers getting the best experience, and in ensuring that they do, quality assurance plays a vital role.
Original Post: BugRaptors | https://medium.com/dev-genius/role-of-qa-in-banking-digital-transformation-35ac9f86fe1f | ['Bally Rezed'] | 2020-11-02 14:30:37.855000+00:00 | ['Quality Assurance', 'Banking Software Testing', 'Banking Technology', 'Banking App Testing', 'Digital Transformation'] |
1,421 | How To Check If a List Is Empty in Python | How To Check If a List Is Empty in Python
Learn multiple techniques to check for an empty list
Photo by Andrew Neel on Unsplash
There are many options to check if a list is empty. Before getting into the syntax of the solutions, let’s lay out the different factors involved in deciding which method we want to use.
The expression we craft can fall into one of two camps, an explicit comparison to an empty list or an implicit evaluation of an empty list. What does that mean? | https://medium.com/better-programming/how-to-check-if-a-list-is-empty-in-python-b29faecaadc1 | ['Jonathan Hsu'] | 2019-11-17 23:21:55.839000+00:00 | ['Software Development', 'Programming', 'Python', 'Data Science', 'Technology'] |
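As a minimal preview of those two camps (the function names here are invented for illustration), the explicit style compares directly against `[]`, while the implicit style relies on an empty list being falsy:

```python
def is_empty_explicit(items):
    # Explicit comparison: compare directly against an empty list.
    return items == []

def is_empty_implicit(items):
    # Implicit evaluation: an empty list is falsy in a boolean context,
    # so `not items` is True exactly when the list is empty.
    return not items

print(is_empty_explicit([]), is_empty_implicit([]))    # True True
print(is_empty_explicit([1]), is_empty_implicit([1]))  # False False
```

Both return the same answer for lists; the difference is whether the emptiness check is spelled out or left to Python's truth-value rules.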
1,422 | How to organize a RoadShow to present Tutellus in 11 cities, 8 countries and 3 continents in just 20 days | How to organize a RoadShow to present Tutellus in 11 cities, 8 countries and 3 continents in just 20 days Nacho Hontoria Apr 24, 2018
We have not gone mad. We are in the middle of the #TutellusRoadShow, and in these months before summer it’s essential to spread the Tutellus project and reach as many people as possible with physical events, with the aim of creating Community and presenting the project, face to face, to thousands of people.
The idea is to use the Community and the network of partners that we have created to be in all those cities at the same time, optimizing costs and maximizing the noise generated. It is about combining our own attendance with that of the Community, Partners and Advisors.
1. Lithuania, April 25th
We start tomorrow, the 25th, in Lithuania, at a very interesting crypto event. In this case we have the help of @_javigon, a partner at Avolta, one of our main partners in the project. If you are in Vilnius, you can meet him there!
2. Prague, May 4th
Our CEO, Miguel Caballero, will be there running the marathon and presenting Tutellus in one-to-one mode. So if you’re in the city and want to talk about Tutellus (or even the marathon), just let us know.
3. Santiago (Spain), May 4th
The same day we can see each other in Santiago de Compostela, Galicia, at Centro Socio Cultural As Fontiñas. Save the date: 4th May, 19h. Our Ambassador is Héctor Cores and the event is almost at full capacity!
4. London, May 8th
Let’s go to London again, where our CEO will present Tutellus in the event managed by Coinsilium and StartupToken: “How decentralization is changing the educational industry”.
5. Buenos Aires, May 8th
Same day, but on the other side of the world, in Buenos Aires, we have another event. This time it is managed by our local Ambassador Juan Martin. Join us if you are in the city!
6. Barcelona, May 10th
Similar to the event in London, we’ll be in Barcelona on 10th May at 19:00. Register here!
7. Consensus (NYC), May 14th
Keeping on with the #TutellusRoadShow! On the 16th of May we’ll present Tutellus at Consensus, maybe the biggest blockchain event of the year. We hope to see a lot of friends there, like @NEMofficial and @kstellana.
8. Monaco, May 16th
If the Concorde were still in service we would use it this day, because we’ll present Tutellus in Monaco on the same day as in New York. We’ll be at MIB (Monaco International Blockchain) with the aim of presenting our project to the community.
9 & 10. Valencia (Spain) and Caracas (Venezuela)
And two more cities: our local Ambassadors in Valencia (Spain) and Caracas (Venezuela) are confirming new events in their cities during May, so stay tuned!
11.- Málaga — Event has been cancelled
Our local Ambassador is working hard to organize another new event in the capital of the Sun Coast. Stay tuned!
And that’s the plan for the following three weeks. After that, we’re preparing a long trip to Asia, which we will combine with more meetups and local events. If you want to be one of them, you can apply here: tutellus.io/ambassadors :) | https://medium.com/tutellus-io/how-to-organize-a-roadshow-to-visit-11-cities-in-8-countries-and-3-continents-in-just-20-days-f5aa5ce020e3 | ['Nacho Hontoria'] | 2018-04-28 12:37:40.824000+00:00 | ['Blockchain', 'Tutellus', 'Events', 'Education', 'Blockchain Technology'] |
1,423 | Covid-19 Will Accelerate the AI Health Care Revolution | Disease diagnosis, drug discovery, robot delivery — artificial intelligence is already powering change in the pandemic’s wake. That’s only the beginning.
Dr. Kai-Fu Lee
ON NEW YEAR’S Eve the artificial intelligence platform BlueDot picked up an anomaly. It registered a cluster of unusual pneumonia cases in Wuhan, China. BlueDot, based in Toronto, Canada, uses natural language processing and machine learning to track, locate, and report on infectious disease spread. It sends out its alerts to a variety of clients including health care, government, business, and public health bodies. It had spotted what would come to be known as Covid-19, nine days before the World Health Organization released its statement alerting people to the emergence of a novel coronavirus.
BlueDot’s role in spotting the outbreak was an early example of AI intervention. Artificial intelligence has already played a useful but fragmented role in many aspects of the global fight against the coronavirus. In the past months, AI has been used for prediction, screening, contact alerts, faster diagnosis, automated deliveries, and laboratory drug discovery.
As the pandemic has rolled around the planet, innovative applications of AI have cropped up in many different locations. In South Korea, location-based messaging has been a crucial tool in the battle to reduce the transmission of the disease. Nine out of 10 South Koreans have been getting location-based emergency messages that alert them when they are near a confirmed case.
In China, Alibaba announced an AI algorithm that it says can diagnose suspected cases within 20 seconds (almost 60 times faster than human detection) with 96 percent accuracy. Autonomous vehicles were quickly put to use in scenarios that would have been too dangerous for humans. Robots in China’s Hubei and Guangdong provinces delivered food, medicine, and goods to patients in hospitals or quarantined families, many of whom had lost household breadwinners to the virus. In California, computer scientists are working on systems that can remotely monitor the health of the elderly in their homes and provide alerts if they fall ill with Covid-19 or other conditions.
These snapshots of AI in action against Covid-19 provide a glimpse of what will be possible in the various aspects of health care in the future. We have a long way to go. Truth be told, AI has not had a particularly successful four months in the battle of the pandemic. I would give it a “B minus” at best. We have seen how vulnerable our health care systems are: insufficient and imprecise alert responses, inadequately distributed medical supplies, overloaded and fatigued medical staff, not enough hospital beds, and no timely treatments or cures.
Health care systems around the world — even the most advanced ones — are some of the most complicated, hierarchical, and static institutions in society. This time around, AI has been able to help in only pockets of excellence. The reasons for this are simple: Before Covid-19 struck, we did not understand the importance of these areas and act accordingly, and crucially as far as AI is concerned, we did not have the data to deliver the solutions.
LET’S LOOK TO the future. There are two grounds for optimism.
The first is that data, always the lifeblood of AI, is now flowing. Kaggle, a machine-learning and data science platform, is hosting the Covid-19 Open Research Dataset. CORD-19, as it is known, compiles relevant data and adds new research into one centralized hub. The new data set is machine readable, making it easily parsed for AI machine learning purposes. As of publication, there are more than 128,000 scholarly articles on Covid-19, coronavirus, SARS, MERS, and other relevant terms.
The second is that medical scientists and computer scientists across the world are now laser-focused on these problems. Peter Diamandis, founder of the XPrize Foundation, estimated that up to 200 million physicians, scientists, nurses, technologists, and engineers are now taking aim at Covid-19. They are running tens of thousands of experiments and sharing information “with a transparency and at speeds we’ve never seen before.”
The Covid-19 Research Challenge, also hosted on Kaggle, aims to provide a broad range of insights about the pandemic, including its natural history, transmission data and diagnostic criteria for the virus, and lessons from previous epidemiological studies to help global health organizations stay informed and make data-driven decisions. The challenge was released on March 16. Within five days it had already garnered more than 500,000 views and been downloaded more than 18,000 times.
In the first month of the outbreak in China, Alibaba released an AI algorithm trained on more than 5,000 confirmed coronavirus cases. Using CT scans, it can diagnose patients in 20 to 30 seconds. It can also analyze the scans of diagnosed patients and quickly assess health declines or progress, based on signs like white mass in the lungs. Alibaba opened its cloud-based AI platform to medical professionals around the world, working with local partners on anonymous data for deployment, including modules for epidemic prediction, CT Image analytics, and genome sequencing for coronavirus.
With the amount of medical data in the world now estimated to double every couple of months or so, health care was ripe for AI — even before the virus struck. A 2019 study covering 19 countries’ artificial intelligence health care markets estimated a 41.7 percent compound annual growth rate, from $1.3 billion in 2018 to $13 billion in 2025 in six major growth areas: hospital workflow, wearables, medical imaging and diagnosis, therapy planning, virtual assistants, and lastly but most significantly, drug discovery. Covid-19 will accelerate those trends rapidly.
Deep learning — the capability to process massive, multi-model data at high speeds — presents one of the most far reaching opportunities for AI. Deep neural networks, a subtype of AI, have already been used to produce accurate and rapid algorithmic interpretation of medical scans, pathology slides, eye exams, and colonoscopies. I see a clear roadmap of how AI, accelerated by the pandemic, will be infused into health care.
THE POTENTIAL GOES beyond diagnosis and treatment. Getting appointments, paying insurance bills, and other processes should be much less painful. AI combined with robotic process automation can analyze workflows and optimize processes to deliver significantly more efficient medical systems, improve hospital procedures, and streamline insurance fulfillment. To address the pandemic, AI could automate and accelerate pre-diagnostic inputs by crunching texts, languages, and numbers at machine-level quantity and precision.
With sufficient data as a foundation, AI can also establish health data benchmarks for individuals and for population. From there, it’s possible to detect variations from the baseline. That, in turn, positions us to identify potential pandemics early. It’s not easy. Systems need to be connected so that early alert and response mechanisms can be truly effective. That appeared to be a shortcoming in the early days of the coronavirus’ outbreak.
There are already huge opportunities for using AI models and algorithms for new drug discovery and medical breakthroughs in genomic sequencing, stem cells, CRISPR, and more. In today’s pharmaceutical world, there is a hefty price tag to developing a treatment. A huge part of this cost is eaten up by the money and time spent on unsuccessful trials. But with AI, scientists can use machine learning to model thousands of variables and how their compounded effect may influence the responses of human cells.
These technologies are already being used in the hunt for a Covid-19 vaccine and other therapies. Insilico Medicine, a Hong Kong-based AI company specializing in drug discovery, was among the first companies to react to Covid-19. The company used its generative chemistry AI platform to design new molecules to target the main viral protein responsible for replication. It published the molecules on February 5. AI and machine learning are ushering in an era of faster and cheaper cures for mankind. Drug discovery and the pharmaceutical industry as a whole will be revolutionised.
EARLY ONE WINTER morning in the year 2035, I wake up and notice a bit of a sore throat. I get up and walk to the bathroom. While I brush my teeth, an infrasensor in the bathroom mirror takes my temperature. A minute after I finish brushing my teeth, I receive an alert from my personal AI physician assistant showing some abnormal measurements from my saliva sample and that I am also running a low fever. The AI PA further suggests that I take a fingertip needle touch blood test. While the coffee is brewing, the PA returns with the analysis that I might be coming down with the flu, one of the two types around this season. My PA suggests two video call time slots with my family doctor, should I feel the need to consult her. She will have all the details of my symptoms when I make the call. She prescribes a decongestant and paracetamol which is delivered to my door by drone.
That future is not as far off as it seems. Soon, as medical science and computer science further converge, we will move into an era of fully autonomous AI when we may expect people to choose wearables, biosensors, and smart home detectors to keep them safe and informed. And, as data quality and diversity increase from the wearables and other internet-of-things devices, a virtuous cycle of improvements will kick in.
In this world a novel coronavirus could be tracked, traced, intercepted, and cut off before it got going. In perhaps 15 years, many of us will have AI personal assistants in our households to keep us supported for our families’ day-to-day health issues. Robots or drones will deliver medication to our doors. If a surgery or some other medical intervention is needed, usually it will be a robot performing or assisting a human surgeon or doctor.
In this future doctors and nurses will focus more on the human tasks that no machine can do. The medical professionals or compassionate caregivers will combine the skills of a nurse, medical technician, social worker, and even psychologist. They will operate the AI-enhanced diagnostic tools and systems, but they will concentrate on communicating with patients, consoling them in times of trauma, and emotionally supporting them through their treatment.
In all this there are the key issues of privacy and data protection, particularly when it comes to patients’ records. It would be irresponsible to let useful data sit in their own isolated compartments, instead of extracting their usefulness to serve the progress of our societies. I am a big proponent of using innovative technological solutions to solve newly arisen technology issues, and the good news is that there has been progress made in federated learning, also known as distributed learning. In this framework, patients’ data is stored and never leaves their host health system or hospitals or personal devices, as machine learning models are trained from separate datasets, processed and combined subsequently. Technologies, such as federated learning, homomorphic encryption, and trusted hardware execution environments would further ensure data is computed, transmitted, and stored to meet preferred settings, as privacy requirements vary around different countries and cultures.
IF NOTHING ELSE, Covid-19 has proven that our shared challenges call for AI that recognizes how intertwined our destinies are. In the past global collaboration has led to the eradication of smallpox and the near-eradication of polio. As we work toward the goal of mitigating, treating, and eradicating the pandemic, it is clear that public health does not stop at national borders. Medicine is an arena where every country will benefit from building on, and with, others’ research. The whole world’s data will generate the most robust insights into health and disease.
AI will help ensure we will be better prepared for the next pandemic. It will need medical scientists, AI scientists, investors, and policy makers to collaborate. Venture capital is going to pour into healthcare and provide fresh impetus and focus for smart entrepreneurs and researchers. And, perhaps, as our brightest minds work on this challenge together, we can emerge acknowledging that our common enemy is not each other but a virus. It will take a planet to move our global healthcare systems to the next level.
Kai-Fu Lee, Ph.D., is the Chairman and CEO of Sinovation Ventures.
The article first appeared on WIRED Backchannel: https://www.wired.com/story/covid-19-will-accelerate-ai-health-care-revolution/ | https://kaifulee.medium.com/covid-19-will-accelerate-the-ai-health-care-revolution-a307458e7b7b | ['Kai-Fu Lee'] | 2020-07-21 07:29:28.477000+00:00 | ['Covid 19', 'China', 'Artificial Intelligence', 'Healthcare', 'Technology'] |
1,424 | Pyramid of Doom — the Signs and Symptoms of a common anti-pattern | Pyramid of Doom — the Signs and Symptoms of a common anti-pattern
with some tips on how not to code yourself into a corner
Anti-patterns. They are the bane of many developers who have had the misfortune of meeting one. The pyramid of doom is often one that a lot of new JavaScript developers write. Most of the time, it’s written in innocence with no code janitor to tell them otherwise.
I’ve written pyramids of doom in my early days and I’ve experienced them from others. The anti-pattern begins as a few levels of functions, loops, and if-else statements — until the levels turn into an endless maze of curly braces and semicolons that somehow magically works on the condition that no one touches it.
What exactly is a pyramid of doom?
A pyramid of doom is a block of code that is so nested that you give up trying to mentally digest it. It usually comes in the form of a function within a function within a function within a function of some sort. If not, then it’s a loop within a loop within a 3 level nested if statement.
When faced with a pyramid of doom, we often ignore the code and just begin again. However, sometimes that’s not feasible because the entire application is written in this anti-pattern style.
It’s an anti-pattern because there is no pattern. It is simply the transmission of a developer’s train of thought as is without any categorization or organization.
Here’s an example of a multi-level beginning of a potential pyramid of doom:
function login(){
    if(user == null){
        //some code here
        if(userName != null){
            //some code here
            if(passwordMatch == true){
                //some code here
                if(returnedval != 'no_match'){
                    //some code here
                    if(returnedval != 'incorrect_password'){
                        //some code here
                    } else {
                        //some code here
                    }
                } else {
                    //some code here
                }
            } else {
                //some code here
            }
        } else {
            //some code here
        }
    }
}
There are other ways to code pyramids of doom, such as nested anonymous functions and callback nesting. In fact, if you nest something enough, you'll be sure to create a pyramid from it.
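For contrast, here is the same kind of login flow flattened with guard clauses: each failure case returns early, so the happy path reads top to bottom with no nesting. The helper, the user store and the return shapes below are invented for illustration, not taken from the original example.

```javascript
// A tiny in-memory "user store" so the example is runnable on its own.
const USERS = { alice: 'hunter2' };

// Hypothetical helper that produces the same status strings the nested
// example checked for ('no_match', 'incorrect_password').
function authenticate(userName, password) {
  if (!(userName in USERS)) return 'no_match';
  if (USERS[userName] !== password) return 'incorrect_password';
  return 'ok';
}

// Same checks as the pyramid above, but every failure returns early.
// The happy path has zero nesting.
function login(userName, password) {
  if (userName == null) return { ok: false, reason: 'missing_username' };
  if (password == null) return { ok: false, reason: 'missing_password' };

  const status = authenticate(userName, password);
  if (status !== 'ok') return { ok: false, reason: status };

  return { ok: true, reason: 'logged_in' };
}
```

Adding a sixth or seventh check only adds one more line at the same depth, instead of one more level of the pyramid.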
Here are some signs and symptoms that often lead to pyramids of doom and how to cure them.
Lack of planning
Sometimes, developers hit their favorite code editor and start tapping away. It’s alright. We’ve all done it. We take a quick look at the requirements and if there is none, we make it up as we code.
This results in unplanned functions, loops, and statements that need to be written somewhere. Why not just put it right where you’re coding right now?
As a result, we end up building our application in an ad-hoc manner that results in unnecessary code — sort of like if you were to build a house without a plan and just keep rocking on back to the shop to buy more timber. Next thing you know, your budget is blown because you bought too much of the wrong things and you can’t return it.
Cure: pseudo code out your plan
I have my juniors do this all the time to prevent wasted time trying to unravel the nest they’ve written. They don’t get to code anything unless they show me a plan first — even if it’s scribbled down with pen and paper with cross-outs and coffee stains.
The point of the plan is to help structure your thoughts and ensure that you understand the direction of your code. When you are able to do this, it allows you to pre-plan what kind of functions you’re going to write, how your objects are structured and if your inheritance is logical in terms of classification and flexibility.
Basic syntax knowledge only
Many of us jump right into coding because we’re excited. We’ve just figured out how to do a few things and it works. We end up with a pyramid of doom because we don’t know how else to solve the problem.
In this situation, we don’t recognize our anti-pattern because we don’t know any better. However, you can’t build large and complex applications with just basic functions.
Cure: check out OOP JavaScript, inheritance patterns and promises
Upgrade your skills by learning higher-level coding paradigms such as object-oriented programming. Although JavaScript is often presented as a series of functions, those functions are themselves objects, with rules and inheritance patterns.
Understanding the concept of promises will also help you flatten your chain when it comes to writing callbacks and prevent your code from ballooning when something goes wrong. Throw errors when things go wrong so you know when and where they happened, rather than having to sit for hours tracing through your code.
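To make that flattening concrete, here is a sketch with three stand-in async steps. A nested-callback version would indent once per step; the promise chain stays at a single depth and funnels every failure into one handler. The functions and data here are made up for the example:

```javascript
// Three fake async steps, each returning a promise.
const fetchUser = (id) => Promise.resolve({ id, name: 'alice' });
const fetchOrders = (user) =>
  Promise.resolve([{ user: user.id, total: 40 }, { user: user.id, total: 2 }]);
const sumTotals = (orders) =>
  Promise.resolve(orders.reduce((sum, order) => sum + order.total, 0));

// Callback nesting would grow one indent level per step. The chain below
// stays flat no matter how many steps are added, and a single .catch
// replaces error handling at every level.
function orderTotal(id) {
  return fetchUser(id)
    .then(fetchOrders)
    .then(sumTotals)
    .catch((err) => {
      // One place to surface failures from any step in the chain.
      throw new Error('orderTotal failed: ' + err.message);
    });
}
```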
Complicated is smart
When starting out and without much guidance, we often create large and complicated blocks of code. Some people do it because they think that’s how code is supposed to be: complicated.
We get this misconception that the harder the code is to understand, the smarter we are for creating such a beast. But that is often the sign of inexperience and hubris.
It doesn’t matter how many months or years you’ve been coding. If your main aim is to make the code as complicated as possible, then it means you’re not versed in programming paradigms. When things get complicated and intertwined, the code becomes much more fragile and prone to breakage. There is no resilience to change and decay at a faster rate.
Cure: Simplify and learn your SOLID principles
Flatten your code and learn to use callback methods instead of nesting functions. Use SOLID principles to guide your choices and the relationships you create.
If you start to see more than one level, you should stop and evaluate your code choices. Most of the time, you can abstract it out — even if you think you’re only going to use it once and never again.
Fix it later mindset
We often tell ourselves that we'll do it later, but from past experience, 'later' often never materializes. It happens all the time. You promised yourself, or were given the promise, that you'll have time at a later date to fix it. But that time never happens. It gets pushed back. It gets re-prioritized. Next thing you know, you're stuck with a smelly piece of fragile code whose workings you've forgotten.
Not only that, you’ve just spent your time further entrenching bad patterns by writing more of the same.
Cure: do it now
It might take more time initially, but once you get the hang of how to write flat and clean code, you get better at it. Every time you refactor your own code as you're working on it, you get better at detecting smelly code and anti-patterns as you write them.
It helps you build muscle memory. Even if no one will ever see your code, it is best to keep applying SOLID principles, cohesive design and flat levels. Good patterns are as much a habit as anti-patterns are. Name your constants. Abstract your SQL commands. Keep your scopes simple and contained. Avoid anonymous functions as callbacks.
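As a small illustration of two of those habits, compare a magic number and an inline anonymous callback with named equivalents. The names below are invented for the example:

```javascript
// Habit 1: name your constants instead of scattering magic numbers.
const MAX_LOGIN_ATTEMPTS = 3;

// Habit 2: give callbacks names. Named functions show up in stack traces,
// document themselves and can be tested on their own.
function isUnderAttemptLimit(attempts) {
  return attempts < MAX_LOGIN_ATTEMPTS;
}

// The anonymous version would read:
//   counts.filter(function (n) { return n < 3; })
// which hides both the rule and the meaning of 3.
function allowedAttempts(counts) {
  return counts.filter(isUnderAttemptLimit);
}
```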
Love ’em globals
Global variables are easy to create and deal with when you’ve got nested code. But bad things happen when you litter your code with them. It might feel safe to do so when your application is small. However, as the code base grows and multiple people start working on it, things can get complicated really quickly.
You don’t know what side effects you’ll have if you modify a global. You’ll need to go variable hunting and figure out how it’s going to impact the rest of the application. You don’t know exactly what it’s going to break, how things are going to break and if there’s going to be a cascading effect.
Then there’s your nested pyramid to deal with. If you need to set a global to use inside your function within a function, then you need to stop right there and rethink your game plan.
Cure: use more local variables
When you use more local scopes, your code becomes isolated and less fragile to change. Your logic gets contained and it forces you to rely on the variables that are immediately available within your scope rather than what’s external.
When you’re not relying on global variables, you can pass states and results between different functions through return rather than worry about how your global state is going to impact on other functions.
Having global variables isn’t bad but they’re best kept in the realm of immutables where things aren’t expected to change.
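A minimal sketch of that shift: instead of two functions communicating through a shared global, state flows in as an argument and out as a return value, so nothing outside the call can be affected. The cart shape here is invented for illustration:

```javascript
// Global-state style (what the article warns against):
//   let cartTotal = 0;
//   function addItem(price) { cartTotal += price; }  // hidden side effect
//
// Local-state style: the cart is passed in and a new cart is returned.
function addItem(cart, price) {
  return { items: cart.items.concat(price), total: cart.total + price };
}

function checkout(cart) {
  // Relies only on what was passed in, not on module-level state.
  return 'total due: ' + cart.total;
}
```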
Final words
You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains. - Steve Jobs
If you find yourself working with a function that feels overly complicated, chances are, it is complicated.
The pyramid of doom gets its name because it only takes one break in the nest for it to collapse into itself. You might be able to put up struts and supports to prevent its downfall, but the bigger your pyramid, the bigger the collapse.
Beautiful code is complexity simplified. It takes more effort, thinking, and skills upfront to create something that is easily understood by others. But your investment will pay off in the long run with a much more robust piece of code that ages gracefully. | https://medium.com/madhash/pyramid-of-doom-the-signs-and-symptoms-of-a-common-anti-pattern-c716838e1819 | ['Aphinya Dechalert'] | 2019-03-26 01:24:14.796000+00:00 | ['Technology', 'Programming', 'Software Development', 'Productivity', 'JavaScript'] |
1,425 | Automating Death and Destruction | 1. Drone wars…
These are the suspect accounts of the Iranian Islamic Revolutionary Guard Corps (IRGC). Some reports have Fakhrizadeh leaving the car before being shot and killed. Some have his bodyguard leaping in front of Fakhrizadeh and being shot four times. Other reports claimed a gunfight erupted between Fakhrizadeh’s bodyguards and the assailants.
IRGC Deputy Commander-In-Chief Ali Fadavi claimed an automated gun killed Iran’s top nuclear scientist, and said the gun had a camera which used facial recognition artificial intelligence (AI), and was controlled by satellite. Fadavi claimed this robotic gun fired thirteen times from 150 meters and shot Fakhrizadeh without harming his wife. Conveniently, the unmanned AI gun and the Nissan truck detonated after the attack, leaving no evidence.
Aftermath of the killing of Mohsen Fakhrizadeh (photo by Fars, Wikimedia Commons)
We don’t know if any of these accounts are accurate or reliable in any way.
The key point is that either the story is true and the technology enabled an AI assassination, or the story is false but plausible because the technology is now cheap and capable.
Another essential point is that the economics of the technology are so favorable that remote, automated weapons have trickled down to the most impoverished nations and found their way into their conflicts.
The disputed Caucasus region known as Nagorno-Karabakh has been fought over for millennia. But the conflict has taken a new turn in 2020 as the Azerbaijani army fought the local Armenians over control of the mountainous territory which occupies less than 2,000 square miles.
Azerbaijani drones, supplied by Turkey and Israel, targeted Armenian and Nagorno-Karabakh soldiers and equipment, destroying their defenses, and forcing them to sign a truce. These drones, according to a senior analyst at the Australian Strategic Policy Institute, are a “potential game-changer for land warfare”.
Azerbaijan benefitted from Turkey’s experience using drones against Soviet equipment in the Syrian civil war, and The Economist cited these conflicts as pointing to the future of warfare.
The cost of drones continues to drop rapidly as their capabilities and technology increase, making them affordable to small countries like Azerbaijan. When the U.S. first deployed drones in the Gulf War in 1991, they were expensive, cutting-edge technology. The first Predator drones armed with missiles to strike enemy targets were used in 2001–2. These drones cost about $4 million each in 2010, and were discontinued in 2018.
An article by Major Zachary Morris in the U.S. Army University Press noted that between 2002 and 2016, the U.S. killed about four thousand enemy combatants using drones in unconventional battlefields.
The unconventional battlefield emphasizes a key weakness of drones. The U.S. selected these combat areas for drone use because the airspace was uncontested. The U.S. military had air superiority — they ruled the skies. Morris noted that in the few instances when drones engaged in combat, they lost. This included the only known air-to-air combat where a manned fighter shot down a Predator drone in 2003, and in 2015 when a Predator drone was shot down by Syria’s “dilapidated air defense system”.
Morris also emphasized the cost/capability tradeoff for drones. As an example, he noted that an MQ-9 Reaper drone cost about $30 million in 2011, more than half the cost of an F-16 fighter at about $55 million. The F-16 represents a much more versatile and capable combat aircraft, carrying four times the payload of the Reaper and able to excel in missions the Reaper simply cannot do.
However, for wars among smaller countries without significant air assets or defense systems, like Armenia and Nagorno-Karabakh, drone warfare has clear benefits. The Israeli and Turkish drones purchased by Azerbaijan cost approximately $5 million each. Less than a tenth the cost of an F-16 fighter, and even less than the cost to train a pilot. | https://medium.datadriveninvestor.com/automating-death-and-destruction-816d9f824683 | [] | 2020-12-14 16:39:33.259000+00:00 | ['Autonomous Cars', 'War', 'Technology', 'Science', 'Economy'] |
1,426 | The Bitcoin Power in Historic Times | “Dreams of the future are always better than the history of the past”
In the last decades, humanity has evolved so rapidly that such a jump was hard to even dream of a hundred years ago. From letters and carriages to the first cars and land-line phones, and from those to masterpieces of mechanical and electronic engineering, today we seem to think we have achieved it all, but the truth is… we are still living in primitive times. Just close your eyes and imagine how the world will look a hundred years from now.
Innovators around the world have led humanity through tremendous evolution over time. It is well known and recorded by historians that we as humans had our best performances and technological advancements in times of crisis. As some of you may know, the first intercontinental ballistic missile developed by the Russians was the foundation for all the spaceships we send today to explore the vast universe. From every bad thing, a greater good emerged; hope has not faded and never will.
There is no doubt that today we live in historic times. Tales of these times will live for hundreds of years, and people of the future will try to imagine how the world looked in these days through all of our movies, insta-stories and live videos on Facebook. This is not the first challenge for us; there have been many, and together we managed to overcome them all. At this turning point, the crisis in both the health and economic segments of every country around the world will undoubtedly lead to innovations that will forever change the way we see our world.
The Time of Change Is Here!
This time, it could all be different. While doctors and scientists around the world rush toward the much-needed cure for COVID-19, which will surely be discovered, we are also facing a revolution of technological advancement that started more than 11 years ago. This innovation has made an unprecedented level of freedom, security and reliability possible in our society, and at the same time it could help us avoid a financial disaster in the near future.
Bitcoin, together with the blockchain technology behind it, has sparked a movement that took everybody by surprise. From governments to ordinary people, everyone is witnessing a change that is inevitably happening. The concepts of money and trust have gained a new meaning, making it possible today to deal with one another without requiring trust or fearing manipulation.
If you have studied blockchain technology, you already know that in a decentralized system there is little to no room for corruption; the freedom of communication and exchange is therefore greater today than ever. If you wish to help someone in need on the other side of the planet, you can now do it in a split second. It is needless to say that if a humanitarian campaign were started through the blockchain, people around the world could join forces at any time to rapidly respond and help those in need.
Besides the health crisis we are facing now, an even bigger threat may be around the corner: the collapse of an artificial financial system, which has so many times proven to be inefficient and corrupt. The system on which our society relied for so long was never meant to be irreplaceable, and it has served its time. We now have the means to move to the next stage.
Bitcoin and Blockchain are not here just to make you or others rich; these technologies are here to take humanity into a different era. An era where “I don’t have all the details” will no longer be an issue. An era where a doctor will know every health risk you have and will be able to help far more than he ever could. An era where information is no longer missing. An era where corruption is lessened and truth is much easier to see. An era where transparency will exist where needed and, at the same time, privacy will not be so easily violated. An era where everything can and will be better.
From the early years until today, mankind has struggled to keep information in safe hands so that future generations would know their identity, bravely protecting every book, letter or note, even at the cost of lives, so it would never be forgotten. Today we have reached a point where we have all the means to make sure that never happens again. We are stepping together into the new world, the young and the elders; we are in this together.
Bitcoin and Blockchain have brought more power into the hands of many than any other revolution or leader could do before. More innovations will come from this, as this is only the beginning. Today we are not talking about wars and invasions, we are talking about what may be the most peaceful transition in the history of mankind. What we are choosing now will shape the world for our descendants and we must pave the road for a better future. We have the information from so many historic events, it is our duty to learn from every mistake and never repeat it, so that the ones who follow will remember us proudly as the pioneers of the new free world they will live in.
So much more can and will be said about this, doubt it not! Bitcoin is not just a new technology anymore, it is a movement. More capable technologies will surely emerge, but the story of Bitcoin and its power will never fade. The world is changing and we must do the same. The dreams we have today are the reality of tomorrow, the evolution is in our nature, it has always been! Now, there is only one thing left for me to ask you…
The future is here, are you here too?
Long Live Bitcoin, Long Live The Change!
A visionary article about tomorrow.
-The CryptoSmurf
Stay around for more articles about Bitcoin, Blockchain and other innovative projects within the industry!
Support The CryptoSmurf!
Follow — Share — Donate
Your support will help to increase the quality and activity of this page, donations are accepted and welcomed to the addresses below:
1MyxMUhgueRq3H5LaLAyzUE4g5HJjC8bbC (BTC)
0xEf6d7Cd77c73c64261557C27684285Ba98cD9Ced (ETH)
Thank you,
The CryptoSmurf! | https://medium.com/@thecryptosmurf/the-bitcoin-power-in-historic-times-f8373133e1e6 | ['The Cryptosmurf'] | 2020-05-05 16:41:10.274000+00:00 | ['History', 'Bitcoin', 'Blockchain', 'Blockchain Technology', 'Revolution'] |
1,427 | Ashton Eaton: From Olympic Gold to Tech | Two-time Olympic gold medalist Ashton Eaton made an art out of breaking his own world records during his decorated decathlon career. He started out at the University of Oregon, where he was a five-time NCAA champion and graduated with a degree in psychology in 2010. Soon after launching his professional career, Eaton set two world records in 2012 and, later that year, achieved his lifelong dream of winning Olympic gold at the London summer games. In 2016, he won his second Olympic gold medal in Rio de Janeiro, becoming the third athlete to win back-to-back golds in the decathlon.
In early 2017, after months of reflection, Eaton announced his retirement from the sport. He and his wife, Olympic bronze medalist Brianne Theisen-Eaton, prepared to move to San Francisco, where they are now working on a new technology and wellness venture. In December, Eaton joined as a new member of South Park Commons and sat down with SPC founder Ruchi Sanghvi to talk about winning, the mental and physical routines that led him to victory, and how he plans to keep beating his own personal records, even off the track.
Great expectations can do great harm
For Eaton, winning Olympic gold was more about personal satisfaction than triumph. The year before his first Olympic Games, in 2011, he was the unquestioned favorite to win the World Championships. It was Eaton’s first year as a professional athlete, and his trial run for Worlds was the best score in the world by a large margin. But despite these expectations, Ashton came in second in the competition.
Eaton recalled standing on the podium to receive his silver medal as a uniquely motivating moment in his life: “I thought, this is amazing. Then, I looked up at the guy on the first-place podium and thought, but if this is amazing, I wonder what that feels like. I wanted to do everything I could to get there.” For Eaton, it was a massive turning point. “I know that if I had won that world championship, I would not have won the Olympics.”
Eaton never lost from then on out — freeing himself from expectation allowed him to focus on skill and personal fulfillment above anything else. “I’ve come to understand expectation versus reality. When I expected to win, my expectation made me do worse.”
“When I was younger, I used to look at the Olympians and try to understand how they were doing what they were doing. I practiced and trained with the curiosity of what it would take to get to that level.” Today, Ashton looks at founders and entrepreneurs who are positively contributing to the world with the same mentality. “I don’t know what I’m capable of,” he said, “but my mentality is the same as it was when I was a young athlete. I have no expectations, just wonder about what’s possible.”
Prepare for uncertainty, then embrace it
In An Astronaut’s Guide to Life on Earth, Canadian astronaut Chris Hadfield describes how his NASA training led him to embrace a “success-and-survival” philosophy that taught him to “prepare for the worst — and enjoy every moment of it.” Eaton adopted Hadfield’s strategy as a way of preparing himself for competition: “One of the things I used to do, the morning or night before the Olympics, was think of all the things that could go wrong, and feel what that would feel like,” he said. “I thought about false starting, strong headwinds, two fouls in the long jump, getting injured… it gave me more leniency on myself to be able to operate at a high level in uncertainty. Because at the end of the day you can only control the controllables.” When something would occasionally go wrong during a race, Eaton had already anticipated it — and was primed to move forward.
You don’t have to win everything to win it all
The decathlon is a unique event, made up of ten constituent parts: four runs, three jumps, and three throws. It’s nearly impossible to win every event, so winning the decathlon usually means losing, too. “My event is great, because you know you’re going to lose something,” Eaton said. “I’m never going to win all ten events.”
Eaton is fast and agile. To win overall, he maximized his strength in the high-scoring speed and jumping events, and minimized his losses in the throwing events. “Even though I lost in some parts of the event, I still always had the possibility of victory.” There’s a lot of awareness required to find what you’re the best at and accept that you don’t have to win everything to win it all. Eaton has applied this same approach elsewhere, maximizing his expertise and finding partners who have different skill sets or who can help identify his personal blind spots.
Use the valleys to develop mental resilience
“A couple of years after my first Olympic gold in London, I thought about quitting for the very first time in my life. A thought just came into my mind: you’re just running in circles, what are you doing? Why are you doing this?” In trying to understand the sudden desire to quit, Eaton realized that developing mental resilience is not unlike developing physical strength. “I snapped out of it, realizing that I was just getting in my head post-win. I began to understand that the way we get physically strong is to go the weight room and break down muscles, resisting against the weight. So I thought, it has to be similar mentally. All these doubts, pressure, fears, anxieties — that’s resistance, and the more you can lift up against it, that’s how you get stronger.” That mental resistance kept him on the track and led him to his second gold in Rio de Janeiro.
“In order to have a peak, you have a valley,” he said. “For a while, I hated peaks, because I knew there was going to be a drop afterwards. But now I anticipate and look forward to them, knowing they are an opportunity to build strength and resilience for whatever comes next in my life.”
Increase the good, decrease the bad, direct the questionable
Part of Eaton’s dilemma over quitting also stemmed from the fact that at 24, he had already accomplished his goal of a gold medal and did not immediately see the point of training to repeat his success. “It was the end of the line, there was nothing after,” he said. “I’m the type of a person who doesn’t like achieving an objective of equal value twice. So to keep going, I had to reframe things.”
Around that time, he picked up a book that helped him reconfigure his approach to life as a journey toward self-improvement rather than victory. Thomas Paine’s Common Sense led him to discover the writing of Nikola Tesla, the Serbian-American physicist and engineer for whom the unit of magnetic flux density is named. In Tesla’s short work, The Problem of Increasing Human Energy, Eaton found inspiration: “Tesla showed me that the goal is progress. And in a way, progress is intangible,” he said. “It requires that you move forward, even if you accomplish what you set out to do.” Reframing his goal in terms of progress, rather than a gold medal, helped Eaton decide to compete in the next Olympic Games. In his training, Eaton followed Tesla’s formula for progress: “increase the good, decrease the bad, direct the questionable.”
“I plan to apply Tesla’s writings to this next venture. The inputs are different than track, but the application of Tesla’s philosophy is the same. With a company, you have all these different people, moving in different directions. The progress of your company, or your task, is the sum total of all these directions. The more you can effectively direct them, the more progress you actually make toward your goal.”
Back to a beginner’s mind
When Eaton was starting out, he trained hard and long — hours of practice and repetition to master events that were then very new to him. The time spent practicing was enormous, but so was the pay off. “It was so satisfying to see how quickly I could learn and improve.”
Eaton’s practice routines shifted as he advanced in his career. “When you develop expertise, you make fewer attempts of higher quality in practice.” Injuries tend to happen when high-level athletes try to spend more time practicing at an unsustainably high level. As someone who’s motivated by progress and improvement, it was rare for Eaton to leave a practice session feeling completely satisfied: “By the time I became an Olympian, I probably left 90% of practices feeling completely frustrated,” he said. “When you’re at the top level, the advancements are rarer, and they are lesser in magnitude,” Eaton said. “That’s why my wife and I decided to retire — because although we were getting better, we were doing so slowly. And, the things we were getting better at were not improved by much.”
“When you do something new, the improvements are super vast. And even though that initial phase is difficult and requires a lot of time, you actually can feel yourself growing. “That feeling of growing, it’s like…yes.”
Since retiring and moving to San Francisco, Eaton has been feeling a lot of yes. The career change means that he’s a beginner again, so he can look forward to newer, faster advancements as he experiments with software and hardware technology and shaping his new health and wellness venture. “That’s why I’m here,” he said. “I’m motivated by that same feeling I had early on as an athlete. I’m actually looking forward to digging into the things that are most challenging, as facing them will likely determine my success later on.” | https://medium.com/south-park-commons/ashton-eaton-from-olympic-gold-to-tech-1dad606d82bc | ['South Park Commons'] | 2018-01-17 20:38:54.307000+00:00 | ['Tech', 'Sports', 'Silicon Valley', 'Technology', 'Olympics'] |
1,428 | Venture Capital with Jim Stallings founder of PS27 in Jacksonville, FL | Jim Stallings founder of PS27
For a very long time, closing the gap in venture capital for several minority groups was a nonfactor in the VC industry. It is no secret that minority- and women-owned businesses are underfunded, and for this reason it is always a privilege to highlight those who are changing this narrative.
The man who created a massive forum for entrepreneurs to reach the breakthrough they needed for their businesses to flourish is none other than Jim Stallings, and he is changing lives for the better every single day.
He is globally recognized for his extensive experience in transforming businesses to reach new heights and for his business leadership skills.
Jim is the founder and CEO of PS27 Ventures, which is a firm whose motto is ‘empowering entrepreneurs to scale their breakthrough ideas’.
PS27 Diverse Culture
What the firm mainly does is invest in early-stage companies and startups that are showing staggering impacts on high-growth markets. The team of professionals working for the firm helps startup businesses grow and become aware of their true ability and potential.
PS27 Ventures mainly aims to invest in e-commerce, technology, sustainability companies, SaaS, healthtech, and fintech.
Jim has a Bachelor of Science degree from the US Naval Academy and an astoundingly rich and diverse career background. He has worked for companies such as ROLM, IBM, and GE in the past.
While talking to the Subject Thread Podcast, Jim shared that he is a firm believer in innovation, the importance of diversity and technology, and the value of minority- and female-led forums, and he discussed what he looks at when investing in a company.
Along with running PS27 Ventures, Jim is a director for many renowned company boards. He is also the co-founder of Smart Box, which is one of the most popular healthy snack vending companies across Florida.
Stream new episodes weekly on Spotify, Deezer, Amazon, Google Podcast, Podchaser, TuneIn, Player FM, Listen Notes, and www.subjectthreadpodcast.com | https://medium.com/@nailahlovell/venture-capital-with-jim-stallings-founder-of-ps27-in-jacksonville-fl-15c5d81ace2d | ['Nailah Lovell'] | 2020-12-15 14:04:27.940000+00:00 | ['Venture Capital', 'Technology', 'Business Development', 'Podcast', 'Entrepreneurship'] |
1,429 | These 6 Apps are the Most Helpful for Managing your Construction Projects | Technology changes on a daily basis and it can be quite challenging learning and keeping up with the best apps and tools for the construction industry. If you own or run a construction business, you will want to keep up with the latest construction apps to make you more productive and improve your bottom line.
Nowadays we can choose from numerous construction management apps that are available on the market. However, it is even more troublesome now to select the right construction app for your needs. There isn’t a one-size-fits-all app to do all that is required to help you manage and run your business. So we’ve gathered a list of what we think are the best construction apps to manage your business:
1. All-In-One Calculator (Free)
All-In-One Calculator Free is a free Android app that helps you to do construction-related calculations. You can perform unit and currency conversions and calculate percentages, volumes, areas and proportions. It includes over 75 calculators and unit converters in one. There are ads within the app, but they aren’t too intrusive.
For your iPhone, try Carpenter’s Helper Lite for all your unit calculation needs. It enables you to calculate stair lengths, roof pitch, rafter lengths and more. There’s also a Quick-Job feature that lets you quickly calculate projects, like fencing, drywall, painting and flooring.
If you are looking to create your own business, then creating your own application can help you serve more customers at a lower cost
2. iNeoSyte — Construction app for daily reports
iNeoSyte (iNeo Pro) is one of the construction apps that helps construction managers to make their field reports quickly and easily.
With a few taps and pre-defined project and report details, site managers or field supervisors can make notes, take photos and generate professional PDF reports.
These reports can be shared via email or a cloud-based system, like Dropbox, Google Drive, Box, etc. The iNeoSyte construction site app reduces the time you spend on the job site paperwork and lets you focus on the most important tasks on construction sites.
Whether you’re a principal contractor, M&E contractor, fit-out contractor or GRC Cladding installation contractor, you will find iNeo Pro app extremely useful for your trade.
3. FallSafety — Safety Management & Compliance App
Falls at construction worksites rank as the number one cause of construction deaths. Your team and employees’ safety should be one of your top priorities. If your employees are your most important asset, then you should definitely invest in their safety.
FallSafety leverages advanced technology to detect falls. It comes with different pricing tiers for different groups of people. The FallSafety Lone Worker and Worker Safety options will provide your organization with the best fall safety monitoring. It is ideal for organizations and individuals who are at risk from worker-down, working without cell service or Wi-Fi coverage, and falls.
FallSafety features two distinct alarms. The first is a countdown alarm that lets you know you have 45 seconds to say you are okay. The second siren alarm helps rescuers find your exact location.
It also automatically notifies emergency contacts, according to the safety protocols, if there is no response after a fall has been detected.
4. Hubstaff — a construction app for timesheet management
Hubstaff is a GPS time tracking software for construction teams that provides an accurate and painless way of recording work hours.
It provides insight into how construction teams use their time and automates several time-consuming tasks. With Hubstaff, inaccurate, cumbersome paper timesheets become a thing of the past.
Some of its key features include:
Time tracking, attendance scheduling, and daily emailed timesheets
Precise GPS and location tracking with geofenced job sites
Time and expense budgeting
Powerful reporting and invoicing capabilities for billing clients
Crews can track their time with the tap of a button or by simply entering the job site.
With Hubstaff, you can save time by no longer needing to track down hours, eliminate the need for guesswork, and stay on top of project budgets with ease. | https://medium.com/@codescrum/these-6-apps-are-the-most-helpful-for-managing-your-construction-projects-eb4250d1c23f | [] | 2020-12-09 17:04:21.010000+00:00 | ['Apps', 'Projects', 'Technology', 'Building', 'Construction'] |
1,430 | Cookie poisoning leads to DoS and Privacy Violation | When the verification goes wrong.
Avatar cookie contains the URL of the avatar image. But what if we change that?
When I was hunting on cs.money, I noticed that the avatar cookie had the url for the user’s avatar on Steam. I changed the cookie to the URL of some other image and I saw that it was loading on the main page.
Until here there is nothing very special. We can load other images rather than the expected one. So what?
Okay, I tried to chat with support and… my request got blocked. After playing around with the cookie value a little bit, I tried to insert part of the steam avatar url as a parameter for my server.
Privacy Violation
Yes, I was right. The server was not checking the URL properly. The back-end verification was something like this (pseudocode):
The right verification should be:
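The embedded code snippets are not preserved in this text. Here is a plausible reconstruction of both checks as runnable Python — note that the CDN URL and function names are assumptions for illustration, not cs.money's actual code:

```python
# Hypothetical reconstruction -- cs.money's real back-end code is not public.
STEAM_CDN = "https://steamcdn-a.akamaihd.net/"

def is_valid_avatar_flawed(url: str) -> bool:
    # Flawed check: passes whenever the Steam CDN string appears
    # ANYWHERE in the URL, so an attacker URL that merely embeds it
    # as a query parameter slips through.
    return STEAM_CDN in url

def is_valid_avatar_fixed(url: str) -> bool:
    # Correct check: the URL must actually START with the trusted prefix.
    return url.startswith(STEAM_CDN)

attacker_url = "https://attacker.example/pixel.png?ref=" + STEAM_CDN
legit_url = STEAM_CDN + "avatars/ab/abcdef_full.jpg"
```

With the flawed check, `attacker_url` is accepted; with the fixed one, only the legitimate Steam CDN URL passes.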
I got a request on my server from the support agent's browser. It tried to load the image URL by sending an HTTP request to my server. So now I have access to the support agent's IP address and User-Agent.
Denial of Service
Now, think. What if instead of the hacker server, we insert the cs.money logout URL? Bingo!
The support agent's browser makes a request to the logout URL and logs them out.
Final thoughts
It is amazing to see how a small flaw, just a wrong verification of the avatar cookie, can have an impact like that.
Cs.money paid me a $500 reward (high impact at support.cs.money). As I had already reported the problem (being able to change the avatar to other images) and they closed it as Not Applicable, they kindly gave me a $200 bonus. You can check my report here.
Let me know if you liked it, and clap! | https://medium.com/@gatolouco/cookie-poisoning-leads-to-dos-and-privacy-violation-8aa773547c96 | ['Benjamin Walter'] | 2021-04-09 12:05:00.988000+00:00 | ['Infosec', 'Technology', 'Cybersecurity', 'Hacking', 'Bug Bounty'] |
1,431 | 10 Must-Read Books for Software Engineers | 10 Must-Read Books for Software Engineers
Getting better as an engineer is as much about reading code as it is about writing it
Photo by Ria Puskas on Unsplash
Besides all the great offerings of the modern world — podcasts, videos, blogs, etc. — reading a good book is still something many people don't want to miss. I have read many good books covering tech-related topics such as software engineering, and am still reading to learn new patterns and best practices.
Finding great books for software engineering is not an easy task because the ecosystem changes so rapidly, making many things obsolete after a short time. This is especially true regarding books that rely on a specific version of a programming language.
However, there are evergreens available, books that deal with meta-topics, design patterns, or general mindsets.
The following collection consists of some of the most popular, most-read books available. Books that are still relevant today and that are often recommended by senior developers to junior developers. I know that time is precious, especially for software engineers, but if you manage to read some of them it will definitely help you and your career. Note that this list is in no particular order because all of these books are equally recommendable.
Note: None of the links below are affiliate links. | https://medium.com/better-programming/10-must-read-books-for-software-engineers-edfac373821b | ['Simon Holdorf'] | 2020-02-25 10:33:00.862000+00:00 | ['Programming', 'Technology', 'Careers', 'Software Development', 'JavaScript'] |
1,432 | Aviral Singh: Reading 3D printing and doing 3D printing are two different things | Aviral Singh has been working with multinational banking corporations and the software industry. He has been with great organizations like Credit Suisse, Barclays and Citi.
MUDIT JAIN: So, you are not quite a new entrant and you've been trying different things, experimenting with 3d printers. What made you interested in this, and how did you get started?
AVIRAL SINGH: From an industry 4.0 perspective, I got interested in it about four years ago when I read about what 3d printing was and what 3d printing was doing and Augmented Reality and Virtual Reality have been growing on their own and including IoT. And I’ve been trying to follow what’s happening in the space for a while. And December last year was when just fortuitously in a conversation with a friend, we started talking about 3d printing and his wife happened to be a printer engineer. And the conversation started from there. And then I just got a 3d printer, earlier this year and started to play around with it. And, and it’s been very exciting because reading about 3d printing and doing 3d printing, as I’m sure, are two very different things.
So, getting a printer and then starting to experiment with various things, it has really caught my attention and I have absolutely no intention of stopping. So, I hope that gave you kind of a background of what is driving me to this side.
MUDIT JAIN: I’m sure there are multiple applications 3d printers connected with IoT, multiple apps. That’s also going to go, have you tried to experiment in that area as well?
AVIRAL SINGH: I’m starting to think about that, but what’s very interesting is if you look at the market today and what’s happening, 3d printing is an area where things are changing. Sometimes it feels like on an hourly basis, but if you look at the materials coming out, if you look at the techniques coming out, post-production techniques coming out, and one of the things that has happened, is, somebody released a full suite of software to manage, I think up to 40 printers at a time. So those things are already out there. And if you look at, how 3d printing will really work in the future, if you look at least from an industrial perspective, you have to integrate it with the floor management software of the firms. Because if you can’t do 3d printing on demand, which automatically starts things, monitors things, finishes things, takes it off the better place, so you can do your next a bill. That’s going to be very difficult to manage. So, I think the automation piece that you’re talking about, which will include everything from, IoT to monitoring, to control systems around 3d printer are going to be very important. And at the same time, if we don’t look at the AR and VR technologies from a variety of perspectives, in my opinion, AR, is going to be critical in terms of making sure that the models are appropriate for whatever we are trying to do. And VR, is going to be critical in terms of doing the modelling, right? Making sure the models look fine by themselves. And if you’re going to take something which is in the virtual world, which are our models and make it real right, then we have to go for those steps.
MUDIT JAIN: We have virtual reality and augmented reality, which are capable of creating a material product just from the view of it, directly into a 3d printer.
So, you touched upon that: doing 3d printing and knowing and reading about 3d printing are completely different.
Especially when we start. We started with FDM printing, and that's very tough, or very critical, because we often fail and patience plays a critical role. Do you want to share a few of the learnings from the early days?
AVIRAL SINGH: FDM is awesome, but one of the things is it's also slow.
A few other things, I think, especially with the hobby printers: when you first get them, a hobby printer out of the box and an upgraded hobby printer are two completely different printers. So, for me, just that do-it-yourself thing has allowed me to understand how this thing actually works, and you realize it's not magic, but it's absolutely brilliant in terms of everything that has been done over the past 20 or so years in the FDM world to make it happen. And to me, even more than just the printing has been the understanding of what printing can do for us and some of the conversations that I've been having.
There’s another conversation I was having with somebody who is starting, a company to make devices for people with disabilities and the first product they were thinking of doing an advanced wheelchair. So how did they prototype it? 3d printing with carbon fiber. I’ve had conversations with people and the people trying to work on the reduction of carbon emissions and net-zero. Without decentralized manufacturing that isn’t happening. If you look at the transportation industry, I think that’s going to be severely affected by this. After the supply chain disruption of last year in 2020, the US did a week-long exercise of what they would do if that happened again. And the whole basis of that was the existing 3d printing infrastructure in the US and while they haven’t made the results completely public, the part that was public while they found two or three things, which they need to improve on the whole, it seems to be working and they’re pretty comfortable.
MUDIT JAIN: Exactly. And you touched on the medical side, so that's a massive field. Surgery planning is a very good upcoming field in India. And in India, we have been quite behind the US and other European countries, but there's a company which is doing surgery planning on demand: if a hospital has to do a surgery, they just give them a call and it does on-demand printing for them. And there are many such services.
And also, we didn’t know that most of the jewellery and exotic designs are mostly 3d printed first and then it’s casted. So, coming back to, this 3d printing and the contribution to the industry, how are you seeing that you will be contributing to the 3d printing industry because you have garnered a lot of experience and knowledge in 3d printing.
AVIRAL SINGH: I’ve been doing this for about six to seven months. So, my knowledge is very limited. And I don’t know how much I’ll actually end up contributing, but there’s a couple of things which I am going to do. One of the beliefs that I have is that we need to make this much more natural and spread the knowledge. So as an example, before I started 3d printing, I realized I need to learn 3d design.
One of the things that I want to start doing, and fairly soon actually, is start holding classes for school students around 3d design and 3d printing: prepare some light course material and hold them on a regular basis, get people excited about it. And the second objective that I have at this point is to just talk to as many people as possible, because when you talk about jewellery, I talked to a couple of jewellers, right? I talked to dentists. In fact, one of the areas which blew my mind was hearing aids, right? The number of hearing aids: pretty much all of them are made this way because they need to be customized to the shape of your ear. And I think just talking to people gives people ideas as to what can be done, and that's something I intend to do. The one thing we really need to figure out is how we can get 3d scanning to become more egalitarian and accessible. The moment we can do that, I think 3d printing will follow suit.
MUDIT JAIN: And you touched a very good point, because I know the person I was talking about, my friend who runs the startup. He also is a similar kind of guy who does not know design, but he has a full-fledged army of 3D printers with him. We get designs, obviously custom designs have to be made, but we get so many ideas. So, we have to share some of the products or some of the articles featuring 3d printed items, which are some cool stuff, just to know what we are printing.
So, it started with an idea to have this community, or the 3d culture part; the name culture comes from that idea. Then, I was speaking to one of our associates, Vivek, about 3d printing and its applications. So, my idea was that I have a vacuum cleaner at home, and it's an actual story. But the nozzles of that vacuum cleaner, they're all gone. I do not have the nozzles, but can I 3d print the nozzles? Yes, I did. I just took the sizes and I have vacuum cleaner nozzles without spending money outside. And, just to say that you handled one part and you have a part for your light fixtures, that's the real 3d printing utility. And that has to come to the masses. I think we are not there yet.
We need to create that culture. 3d printing has so much scope. And you talked about that printing piece, so removing or reducing the carbon footprint. So obviously, like automobile workshops, those have that requirement. That's the real case we see quite often.
I have a last question for you, and quite a common one: what do you advise to people who are starting in 3d printing, either printing or designing, or just as a hobby, doing both or any one of them? And also to the small entrepreneurs who want to start something in 3d printing?
AVIRAL SINGH: The advice I’m going to give is my learning. So, in my opinion, if you start it up as a business, you’re going to have issues. You’ve got to first start it up as a hobby, because if you don’t enjoy it, it’s going to be a problem. Because 3d printing is not really plug and play today. Even the most expensive machines, right? While they advertise plugin plan may be much more plug and play than the hobby machines. They’re still not plug and play. So, until you enjoy it and understand what actually goes into it and how it happens, if you’re not going to have fun, then you’re not going to progress very easily. So, the one thing is that don’t get disheartened. The market doesn’t exist today, but it will.
Five years from today. I think there’s going to be a huge market. So, riding that wave from the technology being there and no market to the technology, being there on a huge market is I think going to be incredible for anybody who really joins the bandwagon.
MUDIT JAIN: Absolutely. If you look at the technology, it's nothing. I mean, I talk about how the person by whom I got introduced to 3d printing started with saying that, I am a robotics engineer and I wanted to build some parts for my robots, but I was not able to customize them. And that's when he bought the 3d printers. Eventually, he reverse-engineered the 3d printer and he said that this is nothing but a robotic arm, which builds up a product layer by layer. So, he created his own 3d printer, and he now has a good business in India, manufacturing 3d printers. And he has been in the industry for the last eight years. And still, he says that the market is nothing today, just as it was then. So, we do believe the market is going to explode. And I am talking about COVID. You also talked about the COVID days. There have been many talks about 3d printing. This guy actually created face shields, 3d printed on order from multiple government organizations. And that was a great use case and a very quick turnaround. So definitely 3d printing is going to be a key player in manufacturing and service. And also the bespoke industry; I think, as you were saying, customization will be key. The Paralympics had 3d printed products for the athletes. I think it's going to be a great journey.
Thank you very much. And definitely, we’ll be talking much more in the future course and definitely looking forward to having your more talks and more products out into 3d. Thanks a lot for your time today. | https://medium.com/@3dculture/aviral-singh-reading-3d-printing-and-doing-3d-printing-are-two-different-things-874c8b9f201e | [] | 2021-12-25 08:37:51.735000+00:00 | ['3d Printing Technology', '3d Printing Industry', '3d Printer', '3d Printing Service', '3D Printing'] |
1,433 | Rico Nasty the Zoomer | Introduction
For better or worse, 100 gecs has emerged as one of the defining voices of Gen Z. Their post-ironic music is polarizing yet catchy. They take the absurdities of modern pop music — boastful lyrics and overproduced tracks — to the extreme. It feels like it should be satirical, like the Gen Z equivalent of Lonely Island, but the duo actually revels in the absurdity rather than making fun of it.
This attitude is shared by rapper and self-proclaimed “pop-punk princess” Rico Nasty, which is why they have collaborated on several tracks. Rico Nasty has forged her own unique path to stardom. Although well-established now, she was an outcast in the rap world for a long time and it is her strong connection to other outcasts that has given her such a strong, almost cult following. It is also what has drawn her to 100 gecs and is why they work so well together.
This piece examines the music video for their most recent collaboration on Rico Nasty’s single “iPhone”, and how it perfectly encapsulates the post-ironic attitude of Gen Z.
Context
“iPhone” is one of several singles Rico Nasty has released in anticipation of her first studio album, which is scheduled for later this year. The song is produced by 100 gecs and they were a clear influence on Rico Nasty’s vocals. The music video for “iPhone” was released on August 13, 2020, and was created during the height of the first wave of Covid-19. The video was directed by Emil Nava, who has worked extensively with Calvin Harris, Ed Sheeran, and other big-name artists.
Analysis
“iPhone” is about the dysfunctional and often toxic realities of love in the digital world. She compares each new relationship to the newest social media site and points out how each one promises to be better than the last. She “can’t go back to [her] old ways,” in the sense that once there is a new app it’s impossible to go back to older sites like Myspace, which she references. This also refers to relationships since the flaws of past relationships are brought to the surface, making it clear that you can’t enter into a relationship like it ever again.
This idea of newer is better is why the collaboration with 100 gecs makes so much sense. They, more than any other current band or artist, embody newness. They represent the new generation and their addictive and post-ironic music makes it difficult to go back to the corny and repetitive pop music that preceded them. They are the newer, better version of the music industry, and through this collaboration, Rico Nasty reinvents herself within their new musical reality. In a way, she updates herself to the newer, better model.
All of this brings us to the music video for “iPhone”. The video shows a CGI version of Rico Nasty as she performs the song on different devices while a miniature, also CGI, Rico dances on a kitchen counter.
The video mirrors the layers of meaning behind the song. First, there are obvious references to the iPhone and social media in the shots of Rico Nasty on the phone screen. She visualizes the way communication now exists, through our phones, with holograms and her CGI avatar jumping in and out of the phone.
This depiction of communication brings us to the second layer: relationships. The video shows Rico’s avatar interacting with a male avatar. What’s noticeable about this section is that they meet through the phone and jump back into the phone to go on what can be assumed is a date (riding a rollercoaster). The whole relationship is completely mediated by the phone. This shows how the modern relationship is wholly dependent on technology and suggests that it would be impossible to go back to a way of dating that doesn’t use technology.
This video is about evolution and technology’s connection to that change. Whether consciously or not, technology has changed us on an individual level. Nowadays, our identity is formed equally by our online personality as it is by our real-life personality. The CGI avatar of Rico Nasty is a real part of her even though it only exists in her phone. In fact, the focus on the avatar implies that Rico Nasty believes this part of ourselves is becoming larger and may even be more important than our physical selves. This is corroborated by the “real” Rico Nasty in the video — she has a heavy filter on and looks more like CGI than a real person.
So, we have technology-focused lyrics, an overproduced beat by 100 Gecs, and a CGI-driven music video. All of this, along with the theme of change and growth, seems to point to the theory that we are headed down the path towards fully technologically integrated self-identity.
“He said, “I think my phone is hacked”
I think my phone is tapped
I think you’ve got me blocked
Why won’t you call me back?” — Rico Nasty, “iPhone”
The song makes it clear that Rico Nasty recognizes the dangers of going down this path. Much of the song discusses the toxicity of her relationships with others and with her phone, clearly laying out legitimate issues with the reality she lays out. These issues include paranoia, intrusive surveillance, and more. The music video doesn’t paint a rosy picture either. There are harsh strobe lights, allusions to the horror movie The Ring, and even a quick glimpse of the Devil on her phone.
This is where the collaboration with 100 gecs comes into play. Rico Nasty is aware of the dangers the reliance on technology brings. Just like 100 gecs recognizes and points out the flaws of the music industry yet enjoys them rather than critique them, Rico Nasty similarly identifies the issues without criticizing them. Instead, she makes her peace with them and goes along with it. She still uses the technology to create the song and video, and her avatar continues to dance and have a good time despite the disturbing atmosphere around it.
Conclusion
With “iPhone” Rico Nasty takes away technology’s power by revealing its dark side but continues to take advantage of it by making art and enjoying herself. This is what truly defines Gen Z; they are fully aware of how messed up the world is (due to the failures of older generations), and though they make their voices heard about these issues, they still take full advantage of the positives that accompany the negatives. It’s what makes them post-ironic, as perfectly shown through 100 gecs’ relationship with the music industry.
Although at 23 years old Rico Nasty is on the cusp of both generations, this post-ironic attitude firmly solidifies her as a member of Gen Z. | https://colinhodgson.medium.com/rico-nasty-the-zoomer-b90ad06df377 | ['Colin Hodgson'] | 2020-11-05 21:11:06.826000+00:00 | ['Technology', 'Gen Z', 'Music', 'Rico Nasty', 'Music Video'] |
1,434 | FastFormers: 233x Faster Transformers inference on CPU | Since the birth of BERT, Transformers have dominated NLP in nearly every language-related task, whether it is Question-Answering, Sentiment Analysis, Text Classification or Text Generation. Transformers enjoy much better accuracy on all these tasks, unlike RNNs and LSTMs, which suffer from the problem of vanishing gradients, which hampers learning of long data sequences. Also, unlike Transformers, RNNs and LSTMs are not scalable, as they have to take into account the output of the previous step.
Now the main problem with Transformers is that they are highly compute-intensive in both training and inference. While the training part can be solved by using pretrained language models (open-sourced by large corporates like Google, Facebook and OpenAI 😏) and fine-tuning them on our dataset, the latter problem is addressed by FastFormers, a set of recipes to achieve efficient inference-time performance for transformer-based models on various NLU tasks.
“Applying these proposed recipes to the SuperGLUE benchmark, authors were able to achieve from 9.8x up to 233.9x speed-up compared to out-of-the-box models on CPU. On GPU, we also achieve up to 12.4x speed-up with the presented methods.” - FastFormers
The paper FastFormers: Highly Efficient Transformer Models for Natural Language Understanding mainly focuses on providing highly efficient inference for Transformer models, which enables deployment in large-scale production scenarios. The authors specifically focus on inference-time efficiency since it mostly dominates the cost of production deployment. In this blog, we're gonna walk through all the problems and challenges this paper addresses.
So how did they address the problem of the Transformers' highly inefficient inference time?
They mainly utilize three methods i.e. Knowledge Distillation, Structured Pruning and Model Quantization.
The first step is Knowledge Distillation, which deals with reducing the size of the model with respect to depth and hidden states without compromising accuracy.
Second, Structured Pruning, which reduces the size of the models by reducing the number of self-attention heads while trying to preserve accuracy as well.
Finally, Model Quantization, which enables faster model execution by optimally utilizing hardware acceleration capabilities. On CPU, an 8-bit integer quantization method is applied, while on GPU, all the model parameters are converted into the 16-bit floating-point data type to maximally utilize efficient Tensor Cores.
Knowledge Distillation: Knowledge distillation refers to the idea of model compression by teaching a smaller network, step by step, exactly what to do using a bigger already-trained network. While large models have higher knowledge capacity than small models, this capacity might not be fully utilized. It can be computationally just as expensive to evaluate a model even if it utilizes little of its knowledge capacity. Knowledge distillation transfers knowledge from a large model to a smaller model without loss of validity. As smaller models are less expensive to evaluate, they can be deployed on less powerful hardware like a smartphone.
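As a generic illustration of the idea (not FastFormers' exact training objective), a soft-target distillation loss compares the teacher's and student's temperature-softened output distributions — all values below are made up:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative preferences between classes ("dark knowledge").
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Soft-target loss: cross-entropy of the student's softened
    # distribution against the teacher's softened distribution.
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

teacher_logits = [4.0, 1.0, 0.2]
good_student_logits = [3.9, 1.1, 0.1]  # close to the teacher
bad_student_logits = [0.2, 1.0, 4.0]   # disagrees with the teacher
```

Minimizing this loss pushes the student's outputs toward the teacher's, which is why a student that mimics the teacher gets a lower loss than one that disagrees.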
Knowledge distillation methods: Two approaches are particularly used, namely task-specific and task-agnostic distillation.
In the task-specific distillation, the authors distill fine-tuned teacher models into smaller student architectures following the procedure proposed by TinyBERT. In the task-agnostic distillation approach, the authors directly apply fine-tuning on general distilled models to tune for a specific task.
Summary of the workflow for knowledge distillation. Courtesy: Floydhub
Knowledge distillation results: In the experiments, the authors observed that distilled models do not work well when distilled to a different model type. Therefore, the authors restricted their setup to avoid distilling a RoBERTa model to BERT or vice versa. The results of knowledge distillation on the tasks are summarized with the teacher models on the validation dataset in the table below. ("Student" refers to the distilled model.)
Accuracy of teacher and student models on the validation data set for each task of SuperGLUE benchmark with knowledge distillation. Courtesy: FastFormers
Neural Network Pruning: Neural network pruning is a method of compression that involves removing weights from a trained model. In agriculture, pruning is cutting off unnecessary branches or stems of a plant. In machine learning, pruning is removing unnecessary neurons or weights. Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving the computational performance of inference without compromising accuracy. This helps in decreasing the size or energy consumption of the trained neural network and helps to make inference more efficient. Pruning makes the network more efficient and lighter.
Synapses and neurons before and after pruning. Courtesy: Link
Structured pruning methods: The first step of our structured pruning method is to identify the least important heads in Multi-Head Attention and the least important hidden states in the feed-forward layers.
The authors use a first-order method for computing the importance score, which utilizes the first-order gradient information instead of magnitude-based pruning.
Before doing the importance score computation, the authors add a mask variable to each attention head for the gradient computation of the heads. Then the authors run forward and backward passes of the model on the entire validation data set, and the absolute values of the gradients are accumulated. These accumulated values are then used as importance scores to sort the importance of the heads and the intermediate hidden states.
Based on the target model size, the authors select a given number of top heads and top hidden states from the network. Once the sorting and selection steps are done, the authors re-group and reconnect the remaining heads and hidden states, which results in a smaller model. When heads and hidden states are pruned, the same pruning ratio is used across different layers. This enables further optimizations to work seamlessly with the pruned models.
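The scoring and selection steps above can be sketched in a few lines — a toy illustration with made-up gradients, not the paper's actual implementation:

```python
def rank_heads(grad_batches):
    # grad_batches: for each validation batch, the gradient w.r.t. the
    # mask variable of every attention head. The importance score of a
    # head is its accumulated absolute gradient over all batches.
    num_heads = len(grad_batches[0])
    scores = [0.0] * num_heads
    for batch in grad_batches:
        for h, g in enumerate(batch):
            scores[h] += abs(g)
    # Head indices sorted by importance, most important first.
    return sorted(range(num_heads), key=lambda h: scores[h], reverse=True)

def prune_heads(grad_batches, keep_ratio=0.5):
    # Keep the same ratio of top heads in every layer.
    ranked = rank_heads(grad_batches)
    k = max(1, int(len(ranked) * keep_ratio))
    return sorted(ranked[:k])  # indices of the heads to keep

# Made-up gradients for 4 heads over 2 validation batches.
grads = [[0.9, -0.1, 0.05, -0.7],
         [1.1, 0.2, -0.01, 0.6]]
```

With these numbers, heads 0 and 3 accumulate the largest absolute gradients and survive a 50% pruning pass.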
In the experiments, the authors found that the pruned model can get better accuracy when it goes through another round of knowledge distillation, so knowledge distillation is applied again to the model.
Model Quantization: Quantization refers to techniques for performing computations and storing tensors at lower bit widths than floating-point precision. A quantized model executes some or all of the operations on tensors with integers rather than floating-point values. This allows for a more compact model representation and the use of high performance vectorized operations on many hardware platforms.
8-bit quantized matrix multiplications on the CPU: 8-bit quantized matrix multiplication brings a significant speed-up compared to 32-bit floating-point arithmetic, thanks to the reduced number of CPU instructions.
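As a rough illustration of what 8-bit quantization does to the numbers themselves, here is a generic affine (scale and zero-point) scheme in plain Python. This is not the optimized kernels the authors use; it only shows the float-to-int mapping and the small round-trip error it introduces.

```python
# Minimal affine 8-bit quantization sketch: map floats to integers in
# [0, 255] with a scale and zero-point, then map back.
def quantize(xs):
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / 255 or 1.0          # avoid a zero scale
    zero_point = round(-lo / scale)
    q = [max(0, min(255, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]

xs = [-1.0, 0.0, 2.0]
q, scale, zp = quantize(xs)
recovered = dequantize(q, scale, zp)  # close to xs, within half a scale step
```

Storing and multiplying these small integers is what enables the compact representation and the vectorized integer instructions mentioned above.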
16-bit model conversion for the GPU: The V100 GPU supports full 16-bit operations for the Transformer architecture. Also, 16-bit floating-point operations do not require special handling of inputs and outputs except for having smaller value ranges. This 16-bit model conversion brings quite a significant speed gain since Transformer models are a memory-bandwidth-bound workload; a speed-up of about 3.53x, depending on the model settings, was observed.
On top of the structural and numerical optimizations applied, the authors also utilize various ways to further optimize the computations, especially multi-processing optimization and computational graph optimizations.
Combined results
The table below summarizes how effective the combined optimizations are. | https://medium.com/ai-in-plain-english/fastformers-233x-faster-transformers-inference-on-cpu-4c0b7a720e1 | ['Parth Chokhra'] | 2020-11-04 17:34:08.505000+00:00 | ['Data Science', 'Technology', 'AI', 'Machine Learning', 'Deep Learning'] |
1,435 | It’s time we involve citizens in the AI revolution | It’s time we involve citizens in the AI revolution
With intelligent machines increasingly playing a role in our daily lives, the public has a right to be informed of the social implications of new technologies. Vincent (Vince) J. Straub Jan 14, 2020·4 min read
This article was originally published by SAGE Ocean
As the ongoing revolution in robotics and artificial intelligence (AI) disrupts society, it has reignited debate about which principles should guide the development of new technologies, and how we should use them. But although topics like automation and algorithmic bias are now under the public spotlight, there is not enough focus on ensuring that citizens understand how intelligent machines could shape us, and even change what it means to be human altogether. This only risks getting worse if we continue to let industry and academia steer the debate and leave out the public in assessing the social implications of new technology. In response, some are pushing for more transparency in AI research, but that’s not the only measure we should be taking.
The field of AI aims to develop computer systems that can perform tasks we normally associate with human thinking. For example, a program that translates text from one language to another, or a model that identifies diseases from radiologic images; both can be viewed to ‘possess’ some form of artificial intelligence. Robots are often the containers for these systems (AI is the brain and the robot is its body if you will).
The pace at which these technologies are transforming our economy and everyday lives is impressive. But often we don’t stop to ask how these systems actually work; in some cases, they still depend on a largely invisible (often female) data labeling workforce.
More alarmingly, we give little thought to the social consequences of adopting such technologies. Previous technological innovations, like steam power and electricity, have modified the way we live, of course. But so far they have not fundamentally altered what makes us human and what differentiates us from machines — our capacity for love or, more generally, connection, friendship, and empathy. In the age of intelligent machines, this could change.
Now that AI systems are mastering the ability to personalize individual experiences, and with ‘emotional’ companion robots learning to recognize human feelings, our need for human-to-human social interaction may be reduced.
Yet, in times of political polarization, it is exactly such interaction that is crucial for fostering love, mutual understanding, and building a cohesive society. As Kai-Fu Lee, the acclaimed AI scientist, has pointed out, for all of AI's promise, 'the one thing that only humans can provide turns out to be exactly what is most needed in our lives: love'.
A new public-private initiative to involve citizens in understanding the social implications of AI could unite society under the banner of safeguarding core human values whilst improving AI literacy. But, what would this look like in practice? To begin, the government could partner with tech to develop an educational curriculum that teaches the technical basics and social implications of AI to all citizens. At the same time, public and private funders of AI research could adopt an agenda that views AI not just as a technological but a social challenge. Both approaches would ensure we develop a stronger grasp of the upsides and potential pitfalls of using new technologies like AI.
This may sound costly and far-fetched, but there are examples that show it is possible. Last year saw the launch of Elements of AI in Finland, a first-of-its-kind online course, accessible to all, that teaches some of the core technical aspects and social implications of AI. Since being developed by the publicly funded University of Helsinki and tech company Reaktor, over 130,000 people have signed up to take the course.
The UK has also begun to make headway in this area. The Royal Society, for example, last year launched a ‘You and AI’ public debate series to build a greater understanding of the ways AI affects our lives, and how it may affect them in the future. Similarly, the RSA brought together a citizens’ jury to deliberate the ethical uses of AI, and earlier this year, innovation foundation Nesta showed how government support and public funding could be used to advance the use of AI tools in schools and colleges. At the University of Oxford, the announcement of a new Institute for Ethics in AI also means students from the arts and humanities will soon be able to study the social implications of AI (although the way this initiative is being funded has itself drawn significant criticism).
But these are still just small drops in the ocean when compared to the funding flowing into developing better AI technology. Regardless of the form any initiative to understand the social implications of AI takes, what matters now above all is that we get the issue on center stage in the AI debate.
Half a century ago, when AI and robots were still largely the purview of science fiction, the consequences for society were small. Now that both increasingly play a role in our daily lives, every citizen has a stake in the matter. At the start of a new decade, it’s time we demand policymakers think about how the AI revolution can not only grow our economy but strengthen our social bonds and consolidate our democracy. | https://medium.com/@vincejstraub/its-time-we-involve-citizens-in-the-ai-revolution-e482f457449 | ['Vincent', 'Vince', 'J. Straub'] | 2020-05-10 16:05:42.147000+00:00 | ['Artificial Intelligence', 'AI', 'Robots', 'Technology', 'Social'] |
1,436 | 5 Things To Know To Advance Python Programming | 5 Things To Know To Advance Python Programming
Python is an interpreted, interesting, hugely popular, and fun-to-learn new-generation language
Python is a buzzword. Being the most popular modern-generation language, with one of the biggest developer communities and strong machine learning support, makes Python unique and a must-learn programming language. On paper, it looks like the perfect programming language with all the ingredients: libraries covering different use cases, UI frameworks like Flask and Django, machine learning libraries, and visualization libraries.
1. Virtual Env setup
Virtual environment setup is very important when working on projects in Python. It helps to separate and maintain the libraries and versions each project uses for its problem statement. One of your projects may be running on a different version of Python, or different versions of Python libraries; all of this can be handled simply by using a virtual env. Maintaining a virtual environment is very simple and easy, and it provides flexibility, portability, and easy maintainability. Let us start creating a virtual environment.
# to install virtual env setup
pip install virtualenv
# create and activate a virtual environment
python -m venv <envname>
source <envname>/bin/activate
After that, anything you install will be installed in the activated virtual env. Jupyter Notebook is another great tool, which allows you to run Python programs and see the output. Try installing that and see the magic!
2. Iterators, Generators & Decorators
Iterators: As the name suggests, an iterator is something which allows you to loop through and extract or manipulate values.
__next__ : returns the next item
__iter__ : returns the iterator
text = iter('Learning')
print(next(text))
print(next(text))
# output is first two characters
Generators: With the yield keyword, a function returns its values through an iterator, producing them as and when required. This helps to boost performance, return results as soon as possible, and return a sequence of elements from the function. This is a nice feature and can be exploited based on requirements.
def function_name():
    valueString = 'Learning'
    for chars in valueString:
        yield(chars)

retString = function_name()
print(next(retString))
print(next(retString))
# above two prints will return first two characters
Decorators: A decorator wraps a function to extend or describe its behavior without changing its code, which makes the code easy to understand. Decorators can be parameterized and can be very useful in different use cases.
@staticmethod: defines function as static
@classmethod: defines method as class method
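Beyond the built-ins above, you can write your own decorator. The example below is hypothetical (not from the original article): it counts how many times a function is called, while functools.wraps preserves the wrapped function's name and docstring.

```python
import functools

# A custom decorator that counts calls to the wrapped function.
def count_calls(func):
    @functools.wraps(func)  # keep func's name/docstring on the wrapper
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@count_calls
def add(a, b):
    return a + b

result = add(1, 2)
```

After one call, add.calls is 1, and add.__name__ is still 'add' thanks to functools.wraps.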
3. Important libraries and Use
Python has the biggest developer community, which means there is progressive development every minute or second. There are many existing libraries that can be utilized to suit your requirements. Simply put, there is no need to reinvent the wheel; rather, utilize the existing resources.
os : It allows you to perform generic operations related to the operating system, like getting the current directory, changing the directory, etc.
glob : This library allows you to pick all files matching a pattern and process them. Very useful when reading multiple datasets from different files.
numpy : As the name suggests, a useful library for numeric calculations. It allows every kind of calculation in terms of matrices and multi-dimensional arrays.
pandas : One of the most used libraries to read data from files in different formats like csv, tsv, txt, xls, etc. It converts the files it reads into data frames, which are in tabular format and can be joined and easily manipulated.
scikit-learn : A must-learn library to begin your machine learning journey. It contains all the ingredients to train and evaluate models.
matplotlib : One of the most used libraries, and a must-learn for visualizing data in different graphs. Surprisingly, a single command is capable of turning data into many different graphs.
tensorflow / pytorch : These are machine learning libraries and can be very useful for implementing neural networks. They are fast, advanced, and scalable enough to handle huge amounts of computation quickly.
4. Testing and Coverage
Testing is a very important aspect of every programming language. Python has built-in libraries like unittest and third-party libraries like pytest and nose. The unittest library is inspired by JUnit and gives almost the same feel.
import unittest

class Test_Example(unittest.TestCase):
    def setUp(self):
        # default function to define initial variables for all test cases
        pass

    def test_case1(self):
        self.assertEqual(call_test_code, exp_result)

    def test_case2(self):
        self.assertEqual(call_test_code, exp_result)

if __name__ == '__main__':
    unittest.main()
As we see, we can define common variables in setUp and then write other functions starting with the test keyword to test different scenarios.
python -m unittest test_File #it will run all the test cases in file
Along with the above options, you can skip test cases conditionally, and you can exclude certain code from coverage measurement so that the reported coverage is not dragged down by code you do not need to test. Python also has tools, like coverage.py, to calculate coverage for the code.
# the option below is placed on a function or line which need not be counted in coverage
# pragma: no cover
5. Packaging
Once you have written a program or a utility, it is very easy in Python to package it. Packaging your solution allows it to be installed by others just like existing packages such as pandas, numpy, etc. Then your code can be utilized in a very simple manner using an import statement. Below is a sample setup.py for packaging your code. You can find more guidance in the official documentation. It provides lots of options and flexibility; the more you use it, the more comfortable you become with it.
import setuptools
from setuptools import setup

setup(
    name='ExampleCode',
    version='1.0',
    description='Description of Your Package',
    url="github repository link",
    author="Laxman Singh",
    author_email="Mail id",
    long_description=open("readme.md").read(),
    long_description_content_type="text/markdown",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: Not Applicable",
        "Operating System :: OS Independent",
    ],
    python_requires='>=3.7',
    install_requires=['scikit-learn', 'numpy'],
)
Packaging gives you the flexibility of declaring all the dependencies your program needs to have installed. It makes your solution robust and easy for other programmers to use. Packaging in Python is very versatile, and its simplicity gives an amazing feeling; it is a joy for every programmer to ship their code with such ease. A must-try for every programmer.
Conclusions
Python is open source and a highly utilized modern-generation programming language. Along with the above features, Python allows parallel programming, the use of regular expressions, and multitasking. It also has lightweight UI frameworks like Flask, which make it very easy to build easy-to-use API services.
Overall, Python is an interpreted, high-level, advanced new-generation programming language which is growing very fast. It is a must-learn programming language for all, and we should welcome it as a unique gift to our knowledge area. I hope the above things will help you to advance in your programming journey. Happy Coding! | https://medium.com/nerd-for-tech/5-things-to-know-to-master-python-902f431fab8e | ['Laxman Singh'] | 2021-02-21 06:57:24.838000+00:00 | ['Python', 'Learning', 'Python3', 'Technology', 'Programming'] |
1,437 | How To Stay Relevant As A Machine Learning Engineer In 2021 | Have A Consistent Learning Routine
Success isn’t always about greatness. It’s about consistency. Consistent hard work leads to success. Greatness will come — Dwayne Johnson
One of the main lessons I learned in 2020 is that nothing beats consistency, not talent and not even luck.
Writing over 150+ Medium articles in 2020, and publishing at least 3 AI/ML/DS articles a week over the year made me realise that if you stay consistent, it is possible to accumulate invaluable experience that produces results that can’t be replicated through luck.
In past articles, I’ve alluded to some of the reasons I write on Medium and also the benefits realised as a Machine Learning Practitioner. Through the creation of a regular writing and learning routine, I’ve been able to deliver content on the Medium platform continuously.
This brings us to the first strategy I’ll be incorporating in 2021 to stay relevant within the ML industry. My plan for staying relevant is to transfer the notion of consistency and regime to my accumulation of ML related knowledge.
As a Machine Learning professional, you come across novel algorithms, models and techniques every day.
Most times, to leverage these new algorithms and techniques, you are forced to learn how they operate and function. So, you could argue that ML practitioners are continually learning. However, in 2021, I am making it a habit.
One notable method which I’m using to educate myself and enrich my knowledge in machine learning is to set a measurable goal.
The goal is to read and understand at least 30 deep learning research papers in 2021.
For me, this means reading and understanding the content of a new research paper every two weeks. My approach is to spend at least an hour a day on a research paper. I'm not sure if reading 30 research papers is considered a lot or not, especially for someone more concerned with the engineering of deep learning models as opposed to the study or research.
What I’m sure of is the papers I aim to read are centred around deep learning solutions to computer vision tasks, such as Object Detection, Pose Estimation, Semantic segmentation and more.
Staying Relevant
How will reading research papers help me stay relevant within the ML industry?
In 2020 most of us found ourselves with more time in our hands, typically as a result of the lockdown measures imposed in major cities around the globe. With my time, I explored some of the earliest major convolutional neural networks released by pioneers within the deep learning field.
My exploration of these CNN architectures involved reading research papers, understanding the algorithms and techniques presented and in some cases implementing the CNN architectures using TensorFlow.
Some of the architectures I explored were AlexNet, LeNet, GoogLeNet, PoseNet etc. Transfer learning within machine learning removes the complexity involved in implementing, training and developing conventional CNN architectures.
To gain a deeper understanding of deep learning network architectures, it’s beneficial to visit original research papers and make an attempt to understand the reasoning behind techniques.
Going as far as to implement algorithms and network architecture from scratch will provide any ML practitioner with a more profound understanding of the deep learning domain.
ML practitioners that can read research papers and extract the necessary information to develop an algorithm or neural network architecture are highly sought after across the ML industry.
And this isn't going to change in 2021. | https://towardsdatascience.com/how-to-stay-relevant-as-a-machine-learning-engineer-in-2021-41b5feaa4771 | ['Richmond Alake'] | 2020-12-10 01:04:43.814000+00:00 | ['Machine Learning', 'Data Science', 'Technology', 'Careers', 'Artificial Intelligence'] |
1,438 | How to Sample Data (With Code) | Sampling Our Data
We now need to take a sample of these data for our testing or development process.
How do we know how big our sample should be to represent our population accurately?
This is where we can use Cochran’s formulae:
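Cochran's formula for the required sample size (displayed as an image in the original post) is, in LaTeX form:

```latex
n_0 = \frac{z^2 \, p \, (1 - p)}{e^2}
```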
Using the formula, we can calculate the sample size required to fulfill our required margin of error and confidence level. If we are looking at a specific attribute within a dataset, we can also specify the estimated proportion of our population to contain this attribute — although this is not a requirement.
So, we have a dataset containing 100,000 records. How many of these should we include in our sample?
Let’s say we would like to be 95% confident in our sample representing the population correctly. We can also accept a margin of error of ±5%.
population size: 100K - N = 100000
confidence level: 95% - cl = 0.95
margin of error: 5% - e = 0.05
target proportion: 50% - p = 0.5
First, we have all of the parameters needed to calculate n₀ except Z (the z-score). To find the z-score, we halve our confidence level from 0.95 to 0.475 , then look up this value in a z-score table, which gives us 1.96 .
We find our halved confidence level 0.475 in the table. The z-score is given by the values on the axes, 1.9 + 0.06, giving us 1.96. Find the table here.
Alternatively, we can calculate this using Python, R, or Excel.
Python
-----
import scipy.stats as st
st.norm.ppf(1-(1-0.95)/2)
[Out]: 1.959964
R
-----
qnorm(1-(1-0.95)/2)
[1]: 1.959964
Excel
-----
=NORMSINV(1-(1-0.95)/2)
1.959964
Note that in every code, we perform the same 1-(1-0.95)/2 calculation. This is because every implementation calculates the z-score for a left-tailed single distribution, that is, the area from negative infinity up to the z-score.
Instead, what we want is a center-aligned area, the middle 95% of the distribution, which is exactly what the 1-(1-0.95)/2 adjustment gives us.
Now that we have all of the required values, we calculate n₀:
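The substitution, shown as an image in the original, works out to:

```latex
n_0 = \frac{1.96^2 \times 0.5 \times (1 - 0.5)}{0.05^2} = 384.15
```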
To satisfy our sampling needs, we should round this up to 385 records. We round up as this is our threshold: anything below 384.15 will have less than a 95% confidence level, or greater than a 5% margin of error.
To perform this calculation in code, we do the following:
Python
-----
z**2 * p * (1 - p) / e**2
[Out]: 384.15
R
-----
z^2 * p * (1 - p) / e^2
[1]: 384.15
Excel (replace each letter with the respective cell)
-----
=z^2*p*(1-p)/e^2
384.15
The second formula is what we use for small datasets; it adjusts our required sample size based on the population size N.
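In LaTeX form (the original shows it as an image), this finite population correction is:

```latex
n = \frac{n_0}{1 + \frac{n_0 - 1}{N}}
```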
Smaller datasets will reduce the number of required samples, whereas larger datasets do not.
Using our population size N = 100,000 causes a slight reduction in our required sample size:
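Numerically (again an image in the original):

```latex
n = \frac{384.15}{1 + \frac{384.15 - 1}{100000}} \approx 382.68
```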
Giving us a final required sample size of 383 records. Performing this same calculation again in code, we have:
Python
-----
n_0 / (1 + (n_0 - 1) / N)
[Out]: 382.68
R
-----
n_0 / (1 + (n_0 - 1) / N)
[1]: 382.68
Excel (replace each letter with the respective cell)
-----
=n_0/(1+(n_0-1)/N)
382.68
With that, we have an easy, replicable method for calculating sample sizes. If we take it a little further, we can implement the full process in code to make our future sampling much easier.
In Python, we can build a simple sample size calculator like so:
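The calculator itself is embedded as a gist in the original post and is missing from this copy; a minimal standard-library version consistent with the calls that follow would be:

```python
import math
from statistics import NormalDist  # stdlib, Python 3.8+

def sample(population, cl, e, p):
    """Cochran sample size with finite-population correction, rounded up."""
    z = NormalDist().inv_cdf(1 - (1 - cl) / 2)    # two-sided z-score
    n_0 = z**2 * p * (1 - p) / e**2               # infinite-population size
    n = n_0 / (1 + (n_0 - 1) / population)        # finite-population correction
    return math.ceil(n)
```

math.ceil performs the rounding up to the threshold discussed earlier.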
Using our same parameters from earlier examples, we calculate the sample size with ease:
sample(100000, 0.95, 0.05, 0.5)
[Out]: 383
We can see how the population size affects our smaller samples calculation too:
sample(500, 0.95, 0.05, 0.5)
[Out]: 218
sample(20, 0.95, 0.05, 0.5)
[Out]: 20
sample(1e8, 0.95, 0.05, 0.5)
[Out]: 385
Of course, we do the same in both R and Excel by threading together the respective codes we previously discussed. | https://towardsdatascience.com/how-to-sample-data-with-code-327359dce10b | ['James Briggs'] | 2020-06-12 15:59:17.632000+00:00 | ['Technology', 'Python', 'Data Science', 'Programming', 'Towards Data Science'] |
1,439 | Listening To The World | This feature story is a personal retrospect of the extraordinary career and life of Dr. Christopher W. Clark, one of the world’s leading marine acoustics experts. He is the Chief Marine Scientist at Planet OS with 30 years of experience in the field.
In 1992, the first time I tried to listen to a singing blue whale in very deep water all I heard was a giant hum. I was convinced that my recording equipment was broken. I checked all the cable connections, all the dial settings; everything was working. I knew some singers were out there because I could see their voiceprints on the Navy displays. And then it hit me — of course I couldn’t hear the whales because my ears and mind were not tuned to perceive the pitch or rhythms of their songs. The pitch was below my threshold of hearing, a single note lasted 20 seconds, and a single phrase lasted almost 2 minutes! But I knew how to solve this problem. I had to play back the recording at a much higher speed, and when I did — voila, I heard the whales singing!
And to my joyful surprise it was not just one singer, but an entire chorus, and as I listened longer it was not just one chorus, but the entire ocean!
I grew up on Bound Brook Island in the town of Wellfleet on Cape Cod, Massachusetts. We were surrounded by pristine wilderness. Our home and those of my grandparents were always filled with music, poetry and literature books. I happened to have a very good ear and a cherubic voice, so when I was nine years old, my mother drove me to New York City to audition for the boy’s choir at the Cathedral of Saint John the Divine.
I passed my audition, was accepted with a scholarship, and between ages 9 and 13, I attended the cathedral’s choir school as a boy soprano. Alec Wyton, a brilliant musician and phenomenal teacher from Kings College, England, was our Master of Choristers. We had choir practice and sang in the cathedral twice a day six days a week.
At the cathedral choir school I was trained to think in sound. I learned how to read music; transcribing its oddly but methodically shaped symbols of time, pitch, and intensity into song. I learned how to contribute my voice into the complexities of a choral arrangement in which mine was but one of 40 soprano voices inside an arrangement that included 30 alto, tenor, and bass voices. And from that I learned how to invert the process; how to dissect complex musical scenes into component parts. I was taught and learned an entirely new “language” — the language of music, song and sound. All of this permeated not just the way I listened, but also the way I sensed the world and my existence. In a very abstract way, and similar to the way we each have a “mind’s eye”, I acquired a “mind’s ear”. It was an absolutely amazing experience that has served as a foundation throughout my life.
As I grew up and up it never occurred to me that others had not acquired the same skills or did not possess the same level of auditory sensitivity. I saw in sound. Maybe I was predisposed to this, I don’t know, but it’s a core part of who I am. Just as there are some who can do mathematical metaphorical somersaults, I do the same — only in the universe of sound. | https://medium.com/planet-os/listening-to-the-world-b2f44d12cf61 | ['Planet Os'] | 2017-03-09 16:59:32.639000+00:00 | ['Big Data', 'Life', 'Oceans', 'Environment', 'Technology'] |
1,440 | Implementing gRPC Server Side Streaming With Go | Implementation
Photo by Cam Adams on Unsplash
It’s implementation time! If you are reading this section, I assumed you already know about this 3 things:
gRPC
Server Side Streaming
Go
Server-side streaming is especially useful if your server must return a bulky payload. By using streaming, you can split those responses up and return them one by one, and the client will be able to cut off unused responses when it already has sufficient responses or has been waiting too long for some of them.
Okay now let’s jump straight in to the code!
Proto File
For starters, we will need to define our protobuf file to be used by the client and server side. Let's just make a simple one here.
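The proto definition itself is embedded as a gist in the original post; a minimal data.proto consistent with the description (the package, service, and field names here are assumptions) might look like:

```protobuf
syntax = "proto3";

package data;

// A single RPC taking a Request and returning a stream of Responses.
service StreamService {
  rpc FetchResponse (Request) returns (stream Response) {}
}

message Request {
  int32 id = 1;
}

message Response {
  string result = 1;
}
```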
This proto file basically contains a single RPC that takes a Request parameter and returns a stream of Response messages.
Before we proceed, we will also need to generate our pb file to be used by our Go program. Each programming language has a different way to generate the protocol buffer file; for Go, we will be using the protoc library.
If you haven’t installed it yet, Google provided the installation guide for that.
Let’s generate the protocol buffer file by running:
protoc --go_out=plugins=grpc:. *.proto
And we will have data.pb.go ready to be used.
Client File
For the next step, you can either make the client or server file in any order.
But in this example I will be making the client file first.
The client will basically be the one sending the request and the one receiving multiple responses.
The client will call the gRPC method FetchResponse and wait for all the responses. I am using goroutine here to show the possibility of the concurrency here and using channel in order to wait until all the processes are finished before exiting the program.
Server File
For the third and final file, we will be making the server file. This file will receive the request from the client and in turn send a stream of responses back to the client.
In the server file, I am using goroutines as well to simulate concurrent processing.
For each request, I will stream 5 responses back to the client side, each with a different processing time in order to simulate varying workloads.
Output
Now comes the exciting part: let's build both our client and server files with go build to get our binaries, and open up two separate console windows to run them.
You should turn on the server first before the client since the client will directly invoke the server method.
So let’s go inside the directory of each of your binary files and run both of them with /.server and ./client .
Your client will output this:
2020/11/10 22:26:11 Resp received: Request #0 For Id:1
2020/11/10 22:26:12 Resp received: Request #1 For Id:1
2020/11/10 22:26:13 Resp received: Request #2 For Id:1
2020/11/10 22:26:14 Resp received: Request #3 For Id:1
2020/11/10 22:26:15 Resp received: Request #4 For Id:1
2020/11/10 22:26:15 finished
And the server will output this:
2020/11/10 22:26:09 start server
2020/11/10 22:26:11 fetch response for id : 1
2020/11/10 22:26:11 finishing request number : 0
2020/11/10 22:26:12 finishing request number : 1
2020/11/10 22:26:13 finishing request number : 2
2020/11/10 22:26:14 finishing request number : 3
2020/11/10 22:26:15 finishing request number : 4
If all is well, you have successfully built yourself a gRPC server-side streaming service with Go! If you need the GitHub code for the whole example, I provide it here. | https://medium.com/swlh/grpc-server-side-streaming-with-go-80fe44987663 | ['Pramono Winata'] | 2020-12-28 17:33:31.690000+00:00 | ['Technology', 'Grpc', 'Go', 'Programming', 'Tech'] |
1,441 | Exciting Robotics Opportunities for Amazing Young Professionals | When people think of the future, quantum computing, AI, VR, and robots are fast to come to mind. Advanced robotics are already being used in state of the art warehouses around the world and AI programs are already being used for logistics and JIT supply chain management.
Students today, from STEM related fields to MBAs, are learning about and benefiting from robotics. Getting into robotics in any capacity has the ability to revolutionize the future and your bottom line.
Here are a couple of the biggest companies using and researching robotics: | https://medium.com/@jimgears/exciting-robotics-opportunities-for-amazing-young-professionals-f52caf91947d | [] | 2020-12-22 16:49:02.212000+00:00 | ['Robotics', 'Young Professionals', 'Robotics Technology', 'Robotics Automation', 'Business'] |
1,442 | Google Play Music Is Dead. Long Live Spotify And Apple Music | Google Play Music Is Dead. Long Live Spotify And Apple Music
As another product bites the dust, Google now stands at the risk of killing its own music business
Images from Google, altered
We knew this was coming. Google had been planning to draw curtains over their decade-old music and podcast streaming service for a while now.
Now when the time has finally arrived, this all feels so sudden. However, it isn’t too surprising. The search engine giant has such a long history of discontinuing products that there’s a whole graveyard named after them.
Yet, seeing the doom of Google Play Music really hurts. After all, only last year, it was the default music player app shipped across millions of Android devices.
Today as Google begins forcing users to switch over to the newer YouTube Music app, it's hard to digest the fact that the much-loved Google Music product is officially dead.
Now, this would have been fine if YouTube Music was an equal substitution for Google Play Music. But the thing is, currently, YT Music doesn’t fill that void. Instead, it's a service that primarily focuses on boosting video viewing time rather than music playlists.
This makes Google’s whole strategy of replacing a working product by shoehorning it into another one a very questionable decision. It won’t be an overstretch to say that Google’s current move could inadvertently benefit its competitors even more. | https://medium.com/big-tech/google-play-music-is-dead-long-live-spotify-and-apple-music-b298228225fc | ['Anupam Chugh'] | 2020-10-31 18:43:26.082000+00:00 | ['Business', 'Marketing', 'Google', 'Technology', 'Social Media'] |
1,443 | How to make algorithms fairer | Written by Tom Douglas, Senior Research Fellow at the University of Oxford
Fixing algorithms may not be the best response to bias. Ethicist Tom Douglas offers a more radical approach to creating fairness, that aims for ‘substantive’ rather than ‘procedural’ fairness outside of design.
Our lives are increasingly affected by algorithms. People may be denied loans, jobs, insurance policies, or even parole on the basis of risk scores that they produce.
Yet algorithms are notoriously prone to biases. For example, algorithms used to assess the risk of criminal recidivism often have higher error rates in minority ethic groups. As ProPublica found, the COMPAS algorithm — widely used to predict re-offending in the US criminal justice system — had a higher false positive rate in black than in white people; black people were more likely to be wrongly predicted to re-offend.
Findings such as these have led some to claim that algorithms are unfair or discriminatory. In response, AI researchers have sought to produce algorithms that avoid, or at least minimise, unfairness, for example, by equalising false positive rates across racial groups. Recently, an MIT group reported that they had developed a new technique for taking bias out of algorithms without compromising accuracy. But is fixing algorithms the best way to combat unfairness?
It depends on what kind of fairness we’re after. Moral and political philosophers often contrast two types of fairness: procedural and substantive. A policy, procedure, or course of action, is procedurally fair when it is fair independently of the outcomes it causes. A football referee’s decision may be fair, regardless of how it affects the game’s outcome, simply because the decision was made on the basis of an impartial application of the rules. Or a parent’s treatment of his two children may be fair because it manifests no partiality or favouritism, even if it has the result that one child’s life goes much better than the other’s.
By contrast, something that is substantively fair produces fair outcomes. Suppose a football referee awards a soft penalty to a team that is 1–0 down because she thinks the other team’s lead was the result of pure luck. As a result, the game finishes in a 1–1 draw. This decision seems procedurally unfair — the referee applies the rules less stringently to one team than the other. But if a draw reflects the relative performance of the two teams, it may be substantively fair.
Alternatively, imagine that a mother and father favour different children. Each parent treats the disfavoured child unfairly, in a procedural sense. But if the end result is that the two children receive equal love, then their actions may be substantively fair.
AI researchers concerned about fairness have, for the most part, been focused on developing algorithms that are procedurally fair — fair by virtue of the features of the algorithms themselves, not the effects of their deployment. But what if it’s substantive fairness that really matters?
There is usually a tension between procedural fairness and accuracy — attempts to achieve the most commonly advocated forms of procedural fairness increase the algorithm’s overall error rate. Take the COMPAS algorithm for example. If we equalised the false positive rates between black and white people by ignoring the predictors of recidivism that tended to be disproportionately possessed by black people, the likely result would be a loss in overall accuracy, with more people wrongly predicted to re-offend, or not re-offend.
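To make this trade-off concrete, here is a minimal sketch of a per-group false positive rate, the metric at the center of the COMPAS debate. All the numbers below are invented purely for illustration; this is a generic textbook definition, not COMPAS data or ProPublica’s actual analysis code.

```python
# Sketch: per-group false positive rate, the kind of disparity
# ProPublica reported for COMPAS. All data here is invented.

def false_positive_rate(predictions, outcomes):
    """Share of people who did NOT re-offend (outcome 0) but were
    predicted to re-offend (prediction 1)."""
    flagged = [p for p, o in zip(predictions, outcomes) if o == 0]
    if not flagged:
        return 0.0
    return sum(flagged) / len(flagged)

# prediction 1 = "will re-offend", outcome 1 = "did re-offend"
group_a_pred, group_a_out = [1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 0, 0]
group_b_pred, group_b_out = [1, 0, 0, 0, 0, 0], [1, 0, 0, 0, 1, 0]

print(false_positive_rate(group_a_pred, group_a_out))  # 0.25
print(false_positive_rate(group_b_pred, group_b_out))  # 0.0
```

Equalising these two numbers, for instance by scoring the groups against different thresholds, will generally shift the overall error rate, which is exactly the tension between procedural fairness and accuracy described above.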
We could avoid these difficulties if we focused on substantive rather than procedural fairness and simply designed algorithms to maximise accuracy, while simultaneously blocking or compensating for any substantively unfair effects that these algorithms might have.
For example, instead of trying to ensure that crime prediction errors affect different racial groups equally — a goal that may in any case be unattainable — we could instead ensure that these algorithms are not used in ways that disadvantage those at high risk. We could offer people deemed “high risk” rehabilitative treatments rather than, say, subjecting them to further incarceration.
Alternatively, we could take steps to offset an algorithm’s tendency to assign higher risk to some groups than others — offering risk-lowering rehabilitation programmes preferentially to black people, for instance.
Aiming for substantive fairness outside of the algorithm’s design would leave algorithm designers free to focus on maximising accuracy, with fairness left to state regulators, with expert and democratic input. This approach has been successful in other areas. In medicine, for instance, doctors focus on promoting the well-being of their patients while health funders and policymakers promote the fair allocation of healthcare resources across patients.
Of course, most of us would be reluctant to give up on procedural fairness entirely. If a referee penalises every minor infringement by one team, while letting another get away with major fouls, we’d think something had gone wrong — even if the right team wins. If a judge ignores everything a defendant says and listens attentively to the plaintiff, we’d think this was unfair, even if the defendant is a jet-setting billionaire who would, even if found guilty, be far better off than a more deserving plaintiff.
We do care about procedural fairness. Yet substantive fairness often matters more — at least, many of us have intuitions that seem to be consistent with this. Some of us think that presidents and monarchs should have the discretion to offer pardons to convicted offenders, even though this applies legal rules inconsistently — letting some, but not others, off the hook. Why think this is justified? Perhaps because pardons help to ensure substantive fairness where procedurally fair processes result in unfairly harsh consequences.
Many of us also think that affirmative action is justified, even when it looks, on the face of it, to be procedurally unfair, since it gives some groups greater consideration than others. Perhaps we tolerate this unfairness because, through mitigating the effects of past oppression, affirmative action tends to promote substantive fairness.
If substantive fairness generally matters more than procedural fairness, countering biased algorithms through changes to algorithmic design may not be the best path to fairness after all.
Tom Douglas is a senior research fellow at the University of Oxford. This article is republished from The Conversation under a creative commons licence.
More thought leadership | https://medium.com/digital-leaders-uk/how-to-make-algorithms-fairer-d6236f0e9b04 | ['Digital Leaders'] | 2019-03-14 10:57:46.046000+00:00 | ['Tech', 'Bias', 'Technology', 'Algorithms'] |
1,444 | Microservices, Data Meshes, Micro Frontends and the Timeless Principles of Decentralization | Microservices, Data Meshes, Micro Frontends and the Timeless Principles of Decentralization
Understanding decentralization will help you understand, evaluate & adapt to current technology trends.
Photo credit: Nhia Moua
“Decentralization is based on the simple notion that it is easier to macrobull***t than microbull***t. Decentralization reduces large structural asymmetries.” ― Nassim Nicholas Taleb, Skin in the Game: Hidden Asymmetries in Daily Life
The human body is an amazing system. Optimized over a couple of million years, it seems to work pretty well. Scholar and statistician Nassim Taleb is a huge fan of nature, and as I recently reread some of his work, what stuck with me is the level of decentralization that nature built into our bodies.
We have two kidneys and if one fails, things will be fine. I tore one of the ligaments in my feet and luckily nature provided me with three so I can go on as if nothing happened.
It also occurred to me that most of the revolutions happening in technology companies, microservices, data meshes, or micro frontends really are all coming down to this one fundamental idea that nature has been applying for millions of years: Decentralization.
What is that exactly? I found this quote which I like:
“A strict definition of decentralized systems could be that each system in the structure must fulfill specified demands on interaction with other systems, but it should be possible to develop (and change) the inner structure in each system, including data storage, without dependencies to other systems, as long as the specified interaction stands. It must for instance be possible to insert systems of different origins into the structure. The main condition is that each system must interact with other systems as specified.” – Mats-Åke Hugoson, “Centralized versus Decentralized Information Systems A Historical Flashback”
Computer science as a discipline is pretty new, only a couple of decades old. So it keeps on reinventing itself, which makes it hard to keep up with current best practices. It’s not easy to understand how to adapt a new idea to one’s own “system” or company. In the cases of trends like microservices, data meshes, or micro frontends, however, I think we can simply look at the timeless principles which have been underlying the idea of decentralization in nature all along.
The five essential principles
There are five essential principles which can be grouped into two categories:
There are three drivers of the robustness of decentralized systems over centralized ones:

1. Redundancy
2. Reduced interconnectedness
3. Diversification

And there are two drivers of the cost of decentralized systems over centralized ones:

4. Complexity
5. Gluing things back together
As we are talking about man-made systems here, it’s up to you to choose any level of redundancy, interconnectedness, or diversification for your technology organization. It’s up to you to influence these drivers for certain subareas like data production, which then could result in a “data mesh.” And it is up to you, to decide how your data mesh, your approach to microservices, or your micro frontend approach should look like, adapted to your company’s unique relation to these five principles.
Here’s a closer look at these five principles.
The first driver of robustness is redundancy
The idea is simple: if you have at least two parts in a system, they are redundant if one can fail without crashing the whole system. The ligaments in my feet are functionally mostly redundant; one snapped, but I’m still able to do sports just as before. My arms are redundant, but not completely functionally redundant. With one, I cannot do everything I can do with two; still, having two is a form of redundancy.
The company Spotify implements its application in a redundant way. They embrace the concept of micro frontends, although as far as I can tell they do not call it that. It means that every team essentially owns one small “tile” in a large window which makes up their application. So if the team owning the “Discover Weekly” part has a system crash, you can still find & listen to your music and will barely notice. If the search crashes, you even have a mostly functionally redundant feature in the browsing function.
The company Spotify thus decided to turn up the screw on redundancy to the max they could imagine.
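The redundancy principle behind such “tiles” can be sketched in a few lines: each tile renders independently, and a crash in one degrades only that tile rather than the whole page. This is a toy illustration; the tile names and render functions are invented, and real micro frontends are of course separately deployed components, not in-process functions.

```python
# Toy sketch of fault isolation between independent "tiles".
# Tile names and render functions are invented for illustration;
# this is not Spotify's real architecture.

def render_search():
    return "<div>Search</div>"

def render_discover_weekly():
    # Simulate a crash in one team's tile.
    raise RuntimeError("Discover Weekly backend is down")

def render_player():
    return "<div>Player</div>"

def render_page(tiles):
    """Render every tile; a failing tile degrades to a placeholder
    instead of taking the whole page down."""
    rendered = []
    for name, render in tiles:
        try:
            rendered.append(render())
        except Exception:
            rendered.append(f"<div>{name} is temporarily unavailable</div>")
    return "\n".join(rendered)

page = render_page([
    ("Search", render_search),
    ("Discover Weekly", render_discover_weekly),
    ("Player", render_player),
])
print(page)
```

One tile fails, the other two still render: that is the redundancy pay-off in miniature.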
The second driver of robustness is minimizing interconnectedness
Your two kidneys do not share the same “incoming door”, there are two separate branches of the renal artery, which come from the central descending aorta. Thus the interconnectedness is minimized. If one renal artery is blocked, the other will simply pick up the flow.
The same idea is played out in the concept of microservices. Microservices are an architectural pattern that basically tries to break things down, decouple them, and expose some kind of interface for them, a clear contract as the only way “in & out.” It’s just like our two kidneys, which of course both have their own ureter, just as each microservice has its own point of data storage. That way, both the data storage and the ureter can differ considerably, depending on incoming & outgoing “stuff.” If either the data storage or the ureter fails, the whole system will still be intact because of minimized interconnectedness.
Amazon’s API Mandate
Following the so-called “API Mandate” from founder & CEO Jeff Bezos, the company Amazon transformed its IT landscape to one focused on minimizing interconnectedness. In fact, the original “API Mandate” makes it pretty clear that the only way in and out of a team's internal stuff is through an interface that must be designed such that it could even be used by someone outside the company.
This, for instance, led to a separation of the data storage of different teams, which in turn leads to much more robustness in case of local breakdowns. It also led teams to adopt redundant and fault-tolerant communication practices. Besides increasing the robustness of the system, it also increased flexibility: without dependencies, any team is free to choose its complete technology stack, deploy different programming languages, and use whatever gets the job done best.
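The mandate’s core rule, that a team’s data is reachable only through its interface, can be sketched as follows. This is a toy in-process illustration under invented service names; Amazon’s real services are of course networked APIs, not Python classes.

```python
# Toy sketch of the "interface only" rule: each service owns its
# storage privately and exposes a narrow public interface.
# Service and method names are invented for illustration.

class OrderService:
    def __init__(self):
        self._orders = {}  # private storage; no other service touches it

    def place_order(self, order_id, item):
        self._orders[order_id] = item
        return order_id

    def get_order(self, order_id):
        return self._orders.get(order_id)


class ShippingService:
    def __init__(self, order_service):
        # Depends only on OrderService's interface, never its storage.
        self._orders = order_service

    def ship(self, order_id):
        item = self._orders.get_order(order_id)
        if item is None:
            return f"nothing to ship for {order_id}"
        return f"shipping {item} for {order_id}"


orders = OrderService()
orders.place_order("o-1", "book")
print(ShippingService(orders).ship("o-1"))  # shipping book for o-1
```

Because `ShippingService` never sees `_orders` directly, `OrderService` can swap its storage technology at any time without breaking its consumers, which is exactly the flexibility the mandate bought Amazon.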
The third driver of robustness is diversification
Your immune system is different from your usual organs. It’s dispersed throughout your body and is composed of a variety of different proteins and cells which are meant to react to very different kinds of foreign material.
The little units in our immune system are diversified, they belong to the same subsystem of the system called the human body, all have the same mission, to keep the body healthy, but are all very different. And since our immune system has to respond to a lot of unknowns, as well as changes throughout time, diversification is the only way to keep it working.
Decentralized systems can have very different degrees of diversification, allowing for different robustness & autonomy levels. The concept of a “data mesh” as used at Netflix provides a rather large level of diversification. A data mesh mostly means that a team that produces data really owns it, in the sense that it is also responsible for serving it to potential end-users.
Netflix’s data architecture
But Netflix does not put a tight bound on how teams are supposed to serve their data. Instead, they allow teams to serve data in any of the standard technologies in use at Netflix. Teams can choose AWS S3, Redshift, RDS instances, or Druid to provide data to the end-users. This diversification provides much better adaptability, both to individual end-user needs and to changes in the environment, like new kinds of end-users.
While these three things drive robustness and autonomy throughout companies, they also have a cost.
The first cost driver is complexity
Decentralization means that with every step we decentralize, we also add complexity: where there previously was one, there now are possibly two. The degree might differ, depending on the degree & kind of decentralization, but the trend is always the same. The micro frontends at Spotify are great: they give every team the flexibility to work on its system individually. Teams can launch changes individually and even crash individually. But what previously was one subsystem, the application, just became 10+ subsystems.
Spotify's complexity cost
That means the part that is not decentralized suddenly has to deal with this complexity. And of course, there are such parts; otherwise, the individual tiles would be owned by separate companies, not teams. It means, for instance, that if a developer wants to change teams at Spotify, he has to relearn the way of building his “tile.” It means that if someone from product management wants to get a picture of the “roadmaps” of the complete app, he now has to look at ten different roadmaps, or someone has to synchronize them. It means that if a central party wants to fix a security bug in the database technology used across Spotify, they have to do that 10 times, not just once. So the mere existence of multiple decoupled teams creates complexity costs down the road.
The second cost driver is not about the individual teams, the tiles, but about the glue that holds the window together.
The second cost driver is the need to glue things back together
Mushrooms have many bodies, fruit bodies, connected by a network called “mycelium.” But for humans, nature decided not to tie our bodies together, probably because it would’ve made walking a pretty weird exercise, being glued to one’s family.
The point is, complex systems that have autonomous subparts need to be glued back together somehow. They need to be connected; otherwise, well, they wouldn’t be one system. But that connectedness also comes at a cost. For mushrooms, it means that although the fruit bodies are completely independent, a new fruit body is located pretty close to the others. It shares all resources with the other “family members.” This is a significant cost for a mushroom, but probably a trade-off well worth it in terms of survival. (Check out Jennings & Lysek’s “Fungal Biology: Understanding the Fungal Lifestyle” for details.)
Gluing data costs at Netflix
The same cost driver is out there in the world. Netflix’s data mesh provides a large diversity, but that diversity also drives up the cost of gluing things back together. An individual data engineer or BI analyst at Netflix who wants access to a large variety of data will either need to carry that cost individually, or the company Netflix has to shoulder it.
In this case, Netflix chose to shoulder that cost by, for instance, creating tools like Metacat, essentially gluing back together different data sources like AWS S3, RDS databases, or AWS Redshift. If the diversity weren’t there, if there were only one storage technology, there would be no need to provide a large-scale solution to glue things together.
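The gluing idea boils down to an adapter layer: one catalog interface in front of heterogeneous backends. The sketch below is a generic adapter pattern, not Metacat’s actual API; all class and method names are invented.

```python
# Generic sketch of "gluing back together" diverse data stores behind
# one catalog interface (adapter pattern). Names are invented; this is
# not Metacat's real API.

class S3Source:
    def list_tables(self):
        return ["s3://bucket/events"]

class RedshiftSource:
    def tables(self):  # note: a deliberately different method name
        return ["warehouse.sales"]

class Catalog:
    """One uniform entry point over heterogeneous sources."""
    def __init__(self):
        self._adapters = []

    def register(self, list_fn):
        self._adapters.append(list_fn)

    def all_tables(self):
        tables = []
        for list_fn in self._adapters:
            tables.extend(list_fn())
        return tables

catalog = Catalog()
catalog.register(S3Source().list_tables)
catalog.register(RedshiftSource().tables)
print(catalog.all_tables())  # ['s3://bucket/events', 'warehouse.sales']
```

The diversity of backends is preserved, while a single consumer-facing interface pays the gluing cost once, centrally, instead of every analyst paying it individually.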
When talking about nature & technology systems, it’s easy to forget that we’re actually dealing with humans.
Decentralizing Human Systems
When I wrote about the three drivers of robustness of decentralized systems, I felt like I was missing an important part: the human part.
Today we’re not just decentralizing to make systems more robust. We’re usually decentralizing human systems. Thus the aim is, on the one hand, to use redundancy, minimal dependencies, and diversification to increase robustness. On the other hand, we’re also aiming to use autonomy & responsibility to increase the productivity of the individual units.
That decentralization leads to more autonomy, and that autonomy makes people happier and more productive, was already understood by Sun Tzu and Peter Drucker, and is probably today most famously argued by Daniel Pink, author of “Drive”.
All the techniques mentioned above, microservices et al., are techniques for changing human systems. All of them award more responsibility & autonomy to a unit. A unit doesn’t have to be a team; it can be an organizational unit, a single person, etc. Microservices put all the responsibility into a team’s hands, especially the operational responsibility, after the mantra “You built it, you run it.” Data meshes put the responsibility of dealing with the end-users of data and serving them properly into the team’s hands. Micro frontends put the responsibility of building the actual frontend components into the team’s hands.
So decentralization makes sense whenever we want to either increase the robustness of our system or the productivity of our teams.
Closing thoughts
These principles help me understand how these trends impact me, my system, or the company I work at. They help me understand when having micro frontends is a smart idea, and to what extent a data mesh should be glued back together. They help me see why certain companies chose one application of microservices over another. They help me understand why certain companies go for a fully standardized infrastructure stack while others choose a well-harmonized “Infrastructure as a Service” approach and let teams do whatever they want.
I hope they help you too.
Further Reading | https://medium.com/swlh/microservices-data-meshes-micro-frontends-and-the-timeless-principles-of-decentralization-2ac2516b2951 | ['Sven Balnojan'] | 2020-10-13 18:50:29.914000+00:00 | ['Technology', 'Software Development', 'Startup', 'Microservices', 'Architecture'] |
1,445 | Consulting with Tech — the New Digital Era | By Tracy Cheng
The world has entered a new digital era in the recent decade. While you may have been aware of the rise of payment apps, online trading platforms, virtual banks, as well as all kinds of FinTech, the consulting industry is also undergoing a technology revolution that poses unprecedented challenges to all consultants.
Here comes a frequently asked question: how is the consulting industry transformed by digital technology?
In the past, clients were coming to consultants for issues including management, organizational structure, marketing and promotion, etc. where consultants are expected to take into account the organization’s resources and capability to formulate tailor-made solutions.
Cases are, however, not that ‘straightforward’ today. In the new era, clients are experiencing an increasing number of tech issues in data collection and analytics, generation of insights from raw data, social media marketing, and even digitalized workflows within the organization. They do not want mere solutions or strategies. They are looking for ‘Strategic Technology’ — innovative solutions that utilize technology in a company’s operations. It is the change in clients’ needs and objectives that prompts the transformation and digitalization of the consulting industry.
In light of the ongoing transformation, consultants are expected to learn, understand, and leverage tech. It is of paramount importance for consultants to keep themselves updated on the latest technology developments and breakthroughs, and to stay critical of market changes and evolutions. Disruptions in the traditional consulting industry also explain the rise of ‘Technology Consulting’ — a consulting branch with remarkably extensive potential for development in the Digital Era. A comprehensive ‘Tech Mindset Toolkit’ would certainly be a must for all consultants, not just Technology Consultants.
1,446 | FOMO by Design: How Social Media Is Hacking Our Brains | FOMO by Design: How Social Media Is Hacking Our Brains
Understanding how algorithms manipulate our behavior and what to do about it
Photo: Yiu Yu Hoi/Getty Images
On my recent birthday, only four of my 711 Facebook “friends” wrote on my wall. It was tempting to assume that people scrolling their news feeds saw it was my birthday and thought “Nah, not interested.”
My rational brain, however, knew it wasn’t my friends who lacked basic decency, but the algorithms that ran their online social behavior. Being an occasional user of Facebook, the algorithm doesn’t freely grant me visibility to others — part-timers like me have to work for it. So I played ball and posted a photo of me enjoying my birthday. My motivations for doing this were mixed — part of me wanted to see how the algorithm would respond, but a bigger part of me irrationally feared I was being shunned and needed validation that this was not the case. Having met the algorithm’s demands, within an hour I was granted visibility on others’ news feeds — now people wouldn’t stop writing on my wall.
Being an occasional user of Facebook, the algorithm doesn’t freely grant me visibility to others — part-timers like me have to work for it.
Instead of finding the whole thing ridiculous, I felt a strong sense of social acceptance that contrasted deeply with the rejection I had felt just hours before. I felt significant again, and spent the rest of my birthday on my phone reading and responding to all my messages.
This wasn’t exactly The Great Hack, but this kind of preying on our brains’ desire for tribal acceptance happens every day on social media. Not getting any likes on our post can feel as painful to our brain as being cast out from a tribe that ensures our survival. This leaves us vulnerable to manipulation by social media apps, but we don’t have to be.
Avoidance of pain
To the primitive brain, social exclusion is a serious threat to survival, and avoiding it drives our behavior. We seek out new social connections and become sensitive to social cues. This enhanced social awareness then allows us to tailor our behavior, and increase our likelihood of social acceptance. Back in paleolithic times, survival depended on social connections. Was it this basic human need the algorithm tapped into on my birthday?
In the study “Does Rejection Hurt” published in Science, researchers assessed how our brain interprets social exclusion. First, they placed subjects in a functional MRI (fMRI) machine — like a normal MRI that also measures blood flow to different brain regions and gives us an idea of where brain activity is highest. The subjects then played a video game called Cyberball, where they throw and catch a ball with two other players using a two-button response pad, while inside the fMRI scanner. In the first group of subjects, the other two players started throwing the ball only to each other, giving the subject a sense of social exclusion. Anyone who is bad at football will know that painful feeling of no one passing you the ball (it still hurts). They compared these subjects to a second group who weren’t made to feel excluded. The excluded group experienced increased blood flow to the same parts of the brain that light up when experiencing physical pain. The avoidance of pain is one of the strongest drivers of human behavior.
This kind of preying on the confused tribal brain happens every day on social media.
Social media has been shown to induce the same sense of ostracism as those subjects who were excluded in Cyberball. Receiving few or no likes on a post can lead to poorer self-esteem, reduced belongingness, and perceived ostracism. This effect is thought to be mediated by seeing a lack of likes as a “social exclusion signal,” which is going to have profound effects on our online behaviors.
Different people respond differently to receiving likes, some being more sensitive than others. Most appear to be less bothered about the total number of likes and care more about who is liking their posts. We want those close to us to like our awesome selfies, but we especially want those with high social value to like them. Of course, this doesn’t always happen, so how do we cope when our posts are met with radio silence?
When a post receives few total likes, users often attribute this to an algorithm not favoring their post and people simply not seeing it. But this same rational thought process doesn’t seem to occur when we question why certain people haven’t liked it. You’re able to reassure yourself that an algorithm is why your post wasn’t as popular as you’d have liked, but why didn’t your sister like it? Is she upset with you? Maybe she is jealous of how gorgeous you look? We feel most socially excluded when relationally close people don’t like our posts, even though it’s possible that they simply didn’t see them.
If feeling socially excluded drives us to reach out to others and become sensitive to social cues, then soon we’ll fall into a “social-validation feedback loop.” Facebook’s ability to tap into these basic human drives raises concern over the power that one unelected CEO can have, particularly when such powers hold little accountability.
Seeking pleasure
The uneasy feeling that results from phony “social exclusion signals” puts us in a vulnerable state. Social media then baits us with variable rewards: an algorithm might show one post to more people, and we end up with an inordinate number of likes on one post and none on another, even though they both featured an almost identical selfie. This is why people spend hours pouring money into slot machines — our brains get hooked by the uncertainty of when a reward will arrive.
Every aspect of social media is designed to pull you in, from infinite scrolling mechanisms and autoplay to the never-ending notifications, and it is easy to get hooked. These methods are so effective you may even get “phantom notifications” — that vibrating sensation you get only to find your phone isn’t in your pocket.
The “like” function, while a relatively new concept, taps into that ancient evolutionary drive to always be on the lookout for signs of social acceptance — as do upvotes, retweets, +1s, reaction buttons, and all the other illusory forms of social value. They are the “social currency” of apps like Facebook and Instagram, and receiving them can activate similar pathways in the brain to financial rewards. The reward seems to be greater depending on who the like is coming from, with those closest to us or of higher social value eliciting a stronger reward. A like from someone of high social status is worth its weight in gold — to the brain at least.
If we’re not driven to use social media to avoid the pain of rejection, then we are hypnotized by the allure of social acceptance.
But giving likes activates the same pathways, even when liking posts of total strangers. If someone has liked our posts, we are more likely to like theirs because of the overlapping nature of these reward pathways. The more likes a post has before you click like, the higher the activity in the reward pathways. This is due to “social proof:” The belief that something must be valuable because others say so. This is why it’s easier to turn 10,000 followers into 50,000 than it is to turn 10 followers into 500 — social proof leads to exponential growth, and the socially rich get even richer.
All of this serves to get us hooked and spending more time on our phones instead of real-world social interactions. Why put in all that effort to be sociable when you can just click some buttons and get similar rewards? This effect is only going to be magnified in the current pandemic and the socially distanced world we now inhabit.
If we’re not driven to use social media to avoid the pain of rejection, then we are hypnotized by the allure of social acceptance. But there is another, equally powerful force that glues us to our phones and hijacks our free will.
Relief of FOMO
In the history of our species, the “fear of missing out” has never been greater than it is today. FOMO is that anxious feeling we get when faced with the realization that others are having fun without us. It is a powerful driver of human behavior, from reckless investment decisions to desperate use of social media — it compels us to act. FOMO instills a sense of urgency that can only be relieved by getting involved in whatever we believe we are “missing out” on. Or at least, that’s what our brain tells us.
In a study looking at the social media habits of 2,663 teenagers, FOMO was found to be a strong predictor of social media use. The more prone to FOMO the teenagers were, the more frequent their use of social media, and the higher the number of social media platforms they used. Other studies have linked FOMO in the context of excessive social media use with higher anxiety, depression, and lower perceived quality of life.
FOMO can also reduce our capacity for mindful awareness. It is hard to be in the present moment when you have a persistent concern that others are reaping rewards without you. Even having a conversation while experiencing FOMO is challenging. This is why that friend of yours is constantly checking her phone instead of listening to your riveting tale — a practice known as “phubbing.” This behavior has been shown to negatively affect both professional and romantic relationships.
The popularity of campaigns like the recent #challengeaccepted on Instagram, where women post black and white photos of themselves and tag other women who have made them feel empowered, may be partly explained by their ability to induce FOMO. Imagine seeing a friend of yours accept the challenge and tag someone other than you — if you are susceptible to FOMO, this anxiety may compel you to get involved. If you don’t experience FOMO, you may look at the situation rationally and consciously choose to get involved because you support the cause. While these campaigns can be helpful in building awareness of important societal issues, they also drive social media use and strengthen the “social-validation feedback loop.”
How to regain control of your free will
Social media apps will always try and get into our minds and control our behavior, but this is the goal of any business. “Brain hacking” isn’t new. Since newspapers first started selling advertisements in 1833, businesses have been trying to get into our brains and influence our behavior. What scares us is how effective social media is at this, how insidiously it has made its way into our lives, and how complicit we have been in allowing it to happen. So how can we take the power back?
The reality is that social media is probably here to stay, so, like a loud and obnoxious neighbor, we should find a way to live together harmoniously.
When dealing with addictive behaviors, there are two main approaches to managing them: the first is abstinence, which is generally the most effective (but most restrictive) solution; the second is harm minimization, which involves continuing the behavior but putting measures in place to reduce the harm it may cause.
Abstinence from social media seems to be increasingly popular, and many tout its benefits. But quitting social media can be isolating, with a legitimate “fear of missing out” on watching your friend’s children grow, or staying in the loop with extended family. If you want to play it safe, abstinence could be the best option.
But the reality is that social media is probably here to stay, so, like a loud and obnoxious neighbor, we should find a way to live together harmoniously. Here are some suggestions on how we might do that:
Limit screen time — use the "screen time" feature in iOS and Android to put a limit on how much time you spend on social media apps. This will interrupt the "social-validation feedback loop" and limit its effect on your brain. One study showed that reducing use to 30 minutes per day had a significant impact on well-being.

Have a social media "detox" — take a break and gain some perspective over your social media use. Try a #ScrollFreeSeptember and see how you feel.

Gain awareness of why you are using social media — the more you understand what is driving the behavior, the better you can eliminate unhealthy habits. Checking your phone whenever you are bored, even if you have no notifications, is a bad habit. Use social media for a clear purpose, like wishing a friend happy birthday on their wall, rather than for a dopamine hit.

Put up a "firewall" between you and your screen — most of the rewards from social media are delivered visually, so try reducing the glamour of your newsfeed by putting your phone in grayscale. You'll be amazed at how boring everyone's posts look.
Whatever approach you take, when you do get that uneasy feeling of social rejection, the intense anxiety of FOMO, or the pleasant hit of dopamine from all those likes — remember that it is all an illusion created to steal your attention away from the things that really matter. Use social media to stay connected to friends and loved ones, promote your business, and organize events, but don’t use it to fill a void that social media itself has created.
A narrated version of this article can be found here.

Source: https://onezero.medium.com/fomo-by-design-how-social-media-is-hacking-our-brains-1700561a10ae (Dr. Adam Bell, 2020-09-04)
How communication done at Kulkul

We communicate every day, but how often do you ask yourself how effective your communication is? Like any other business, communication is at the center of Kulkul. As a people-driven business, communication has always been the core activity in our business, and in this article I want to share with you how we communicate with each other, both the concept and the tactical how-to.
This subject is something I share with my team repeatedly without ever tiring of it. I believe getting communication right is critical, whether you run a big company or are building a company from scratch like me.
Photo by Icons8 Team on Unsplash
Debate vs. Discussion vs. Dialogue
There are three modes of communication. I didn't invent these modes myself; I learned them from the CatalystX course on edX. Each mode leads to different results. We will discuss them one by one and share the caveats of each mode.
Debate
Photo by Edvin Johansson on Unsplash
The first mode of communication that people often use is debate. In debate mode, the goal of the communication is to find who's right and who's wrong, or what's right and what's wrong. In this mode, it is hard to achieve constructive communication because people usually just shout at each other without listening to one another. People say, "It should be this way," without helping one another understand why it should be this way or that way.
The debate mode usually happens in political communication or in heated situations where people do not focus on solving the task at hand and focus more on who wins.
Discussion
Photo by Amy Hirschi on Unsplash
Discussion is the second mode of communication and the most common of the three. This is how the average organization communicates. It is better than having a debate, but it is not enough if you want to build an inclusive workplace.
The goal of a discussion is to reach a conclusion. Having a conclusion is good, but imagine a situation where a conclusion is reached in the discussion while there are still people who, spoken or unspoken, say: "yes, but …". They want to be heard, but the conclusion has already been made. In this kind of situation, discussion is not enough.
The most common situation where we are in discussion mode is when someone explains something complex and people just say: "Ok."

The most common situation where we are in discussion mode is when someone explains something complex and people just say: "Ok." This is not helpful because the person explaining the complex topic might not know whether the people they are talking with understand them or not. We will share how to tackle this problem in the Dialogue section.
Dialogue
Photo by Ann Fossa on Unsplash
Last but not least, we have dialogue mode. In dialogue, in contrast to the debate and discussion modes, the goal is to understand each other. So there's a huge emphasis on listening to other people and confirming whether what we understand is correct. In dialogue, a conclusion might happen, but it is not the end. The ultimate goal is to understand each other's understanding, confirm that what we understand is correct, and later act on that shared understanding to solve the problem at hand.
Listening Technique
Parroting
I mentioned earlier that listening is a key part of dialogue communication. Imagine you're in the situation I shared in the previous section: in a discussion, we already have a conclusion, but there's a person who still doesn't fully agree with the final conclusion. They do not agree because they are not heard, or they feel their understanding of the problem was not taken into account.
This is something we want to avoid. One way to avoid this problem is, instead of only saying "Ok" when people explain something to us, to repeat in your own words what you've heard. For example:
Don’t do this!
Michael: Our inbound leads have been low for the last few weeks.
Saru: Ok.
Michael: Hmm… I think we should rethink our strategy.
Instead, do this!
Michael: Our inbound leads have been low for the last few weeks.
Saru: Oh, so for the last few weeks we have had fewer leads than usual?
Michael: Yes, I'm thinking of creating another strategy.
Saru: Is it expected due to the Holiday season?
Michael: Ah, right. What should we do instead to keep our sales team productive this holiday season?
Saru & Michael: (a lot more discussion follows from here)
By implementing parroting, we not only increase the understanding between the parties involved in the discussion, but also enrich the discussion into something more meaningful.
Watch other people’s lips while they’re talking.
This is a tip I learned in the One Month Product Management course, where the instructor shared tips on how to listen better. One of the tricks is to watch other people's lips while they're talking; it makes us focus on listening to them instead of being pulled away by distractions. Remember, our mind is a master procrastinator and prefers something enjoyable over something hard that takes more energy, like listening to other people.
Practice makes perfect.
All of these practices take time before they feel like second nature, so I recommend that you keep practicing this technique and evaluate what you have learned along the way.
When I practice this technique, I always imagine a game where I score a point if I understand what people are talking about and manage to provide value in the discussion by giving opinions or valuable feedback. So score your point by understanding people; once we understand them, we can have more meaningful communication.
Bonus
I recorded a video in Bahasa Indonesia months ago discussing communication and how we can do it better.

Source: https://medium.com/kulkul-technology/how-communication-done-at-kulkul-997273d33b39 (Abdurrachman, 2020-12-26)
The Singularity of Knowledge

Synergy
Beyond imagination, the human mind is also gifted with the ability to decipher patterns — to make sense out of nonsense.
Especially, we seem to love common denominators.
So much so that the biggest breakthroughs in science have revolved around unification as much as they have around discovery. Routinely, we’ve made paradigm-shattering discoveries by simply tying loose ends together, and we continue to operate under this ambition (it can be said that our next target in line is dark matter).
The greatest minds in history have understood this need for unification to be the ultimate prerogative. Some, like Nikola Tesla, had subsequently failed in their connecting of certain dots while others, like James Clerk Maxwell, had become famous for it.
The problem is that it’s not easy. Far from it.
As clever as we are, we’ve compartmentalized our systems of knowledge into such distinct and divided segments of study that it’s near impossible for one student to embark upon two opposing streams of belief, something that had been the norm only a hundred years ago.
The noösphere promises us a rekindling of this comprehensive approach to understanding our world. With its synergetic potential and its touch-point responsiveness, it holds the ability to take all that we've chopped up and bring it back together, even if for a moment, just to see if anything blends together comfortably, anything that we hadn't, or couldn't have, previously considered.
Because, and this is the main point to digest, the noösphere is able to do something that we ourselves have a hard time doing. It can discern and catalogue, cross boundaries and synthesize streams of information. It can employ numerous algorithms that would take us an absurdly long time to match in terms of efficacy.
Sounds like A.I. doesn’t it?
It doesn’t necessarily have to be, though artificial intelligence will certainly be an integral part of its picture, as it currently is.
The noösphere is the environ. We are the data points.
Twitter lets political discourse unfold in real time. Instagram lets people share their experiences with a taste of immediacy. TikTok, well, it may prove useful in some respect one day.
Quora, Reddit, Wikipedia. All far from perfect, but we’re getting there.
Once we’re able to communicate faster and better and once we’re able to contextualize and idealize more comprehensively than ever before, we’ll see the connecting of a new array of dots that we hadn’t previously thought possible.
Knowledge will come together, under a real singularity, and harmonize itself to a point whereby we’ll have as comprehensive of an outlook as we can imagine.
Whatever this really means (and it may mean many very different things), it will be the milestone of our civilization.
Technologically, socially, environmentally, astronomically, biologically — information will reach the apex of interconnectedness; in so doing, we’ll have the most informed understanding that there can possibly be (correlating to our rate of new discoveries) at any given time.
Our segregation of various fields of study will no longer be isolating; our subjective experiences and insights will no longer be so subjective; our vision will no longer be obstructed by division.
The singularity of knowledge — it’s already happening, but it’s about to speed up to rates we won’t even realize until we’re able to look back on it.
Our only obligation, it seems, is to nurture this process rather than standing back and watching it unfold on its own under the presumption of a far-and-away singularity that we don’t have enough time or imaginative power to consider.
In essence, we are the singularity.

Source: https://medium.com/predict/the-singularity-of-knowledge-5b60b04892a6 (Michael Woronko, 2020-12-02)
Good News For Gmail Users

Gmail is the world's most popular email service, and Google has made it even better.
In a blog post, Google announced that users will now be able to edit Microsoft Office files in Gmail without having to save them to Google Drive first.
Earlier, Google had provided the ability to edit Microsoft Office files, but doing so required saving them to Google Drive.
But with this new update, users will be able to edit documents within Gmail without having to save them to Google Drive.
Whenever a user receives a Word or Excel file in an email, they will be able to edit it directly.
Users will also be able to reply to the original email thread with the updated file.
Google is introducing mixed page support in Google Docs and editable Word or Excel files.
It will also introduce support for inserting images behind text and watermarks into Google Docs next year.
Google also said that it will introduce another feature that will make it easier for users to switch from Excel to Sheets.

Source: https://medium.com/@tehnologijaviews/good-news-for-gmail-users-242b9c238913 (2020-12-13)
Using AWS Porting Assistant to Migrate From .NET Framework to .NET Core

A few days ago, João Malés wrote an extensive article on migrating from .NET Framework to .NET Core. Back then, we had a considerable challenge: more than 60 projects with many third-party dependencies, no previous experience with similar work, and the additional constraint of migrating in the same codebase that was going out to developers.
In essence, creating a new Service Studio while bringing value to our customers and having weekly releases in the existing Service Studio. Easy, right?
Now, the challenge was different: we had well-documented (and publicly shared :) ) knowledge from that experience, and more relevantly, we only needed to migrate a part of the codebase from Integration Studio (another OutSystems IDE).
Change things… fast!
Still, this was an elephant of a task. To divide it into bite-size pieces, we decided to experiment with other tools, such as Amazon Web Services (AWS) Porting Assistant and CodeTrack.
But first things first.
Do You Need to Migrate Everything?
As I’ve mentioned, migrating Service Studio to .NET Core was complex due to all the constraints. Let me highlight two:
We had to migrate (almost) everything.
Duplicating code was not an option because we had more than 150 engineers working in the same codebase. We couldn’t have everyone duplicating the same developments on both solutions all the time.
So, ask yourself:
Do you need to migrate everything?
Can you duplicate the code, or do you need to change it continuously?
Can you halt operations to perform this work?
In this case, we only needed to migrate a part of the code, and duplication was an option given the smaller scope. This time around, we took a different approach.
A New Project
Instead of changing the car wheels while in motion, we opted to create a new, separate solution, from scratch, in .NET Core and then move the required code to the new solution. “What if someone needs to change the code you are duplicating?” you may ask. Well, you either create a test that fails if someone changes that code, or you put alarmistics “on build” (using, for instance, Directory.build.targets — I’ll write about it in my next post, stay tuned).
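As a sketch of what such a build-time alarm can look like (the target name and warning text below are illustrative, not the exact rules we used), a Directory.Build.targets file placed at the solution root is picked up automatically by MSBuild and can surface a warning whenever the duplicated projects are built:

```xml
<!-- Directory.Build.targets, placed at the solution root; MSBuild imports it automatically -->
<Project>
  <!-- Runs before every build of the projects under this directory -->
  <Target Name="WarnAboutDuplicatedCode" BeforeTargets="Build">
    <Warning Text="Parts of this project are duplicated in the .NET Core solution. If you change shared code here, mirror the change there." />
  </Target>
</Project>
```

Anyone building the old solution then sees the warning in the build output, which serves the same purpose as a failing test: the duplicated code can't be changed silently.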
But before we copied the code, we needed to understand the critical path of the code we wanted to migrate. We knew the entry point but needed to understand all the paths the entry point went through to provide the final result. To do that, we experimented with a tool called CodeTrack. CodeTrack is a free .NET profiler and execution analyzer. There are other tools out there, but we went with this one.
Using CodeTrack to profile an application
It’s pretty simple: set the executable you want to profile, the optional arguments, it will run the app, and that’s it. For us, it was perfect! The result is a file with the profile result. Click Analyze, and, apart from other goodies, you’ll get exactly what we did: a code trace.
Profiling result
In this example, you can see that it started with one thread in “Main,” then called a Start method from the Car class, then a Console Writeline. With this, we had everything we needed for the new .NET Core solution. At this point, we were still researching, so our next step was to estimate how expensive it would be to migrate to .NET Core.
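For reference, a minimal program that would produce exactly that trace might look like this (a toy reconstruction of the profiled example, not our actual code):

```csharp
using System;

class Car
{
    public void Start()
    {
        // CodeTrack shows this WriteLine as a child of Car.Start in the call tree
        Console.WriteLine("Car started");
    }
}

class Program
{
    static void Main()
    {
        // The trace starts with one thread in Main, which calls Car.Start
        new Car().Start();
    }
}
```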
AWS Porting Assistant
In the article, João Malés referred to Microsoft’s Portability Analyzer:
“This tool focuses on analyzing your code and giving you a thorough report afterward regarding the compatibility between your current framework and the selected target frameworks. However, while the tool can give you a great starting point, don’t trust the results blindly. There are some false negatives, mostly regarding third-party libraries.”
With this in mind, we decided to try out another tool. In between the work we did in Service Studio and this challenge, AWS launched the AWS Porting Assistant for .NET (July 2020). You can read the public announcement here. Recently, they even open-sourced a part of it.
Like everything else out of AWS, the UI is as simple as it gets. The setup process is not seamless, though: you'll need an AWS account with specific permissions. But after that, it's effortless. Just provide the solution you want to assess, and it will give you all the goodies one can expect.
AWS Porting Assistant UI
Let’s take a look at the relevant information:
Incompatible packages: if your project refers to other libraries, be it through direct assembly reference or NuGet package, it will highlight them here. Heads-up: if the tool does not have access to the sources (like the source code of the package), it will consider it incompatible. You’ll have to go after those one by one.
Incompatible APIs: here, you’ll find all the API calls in your code that do not exist in .NET Core. I’ll provide more details on this below.
Portability Score: quoting AWS, “This score is an estimation of the effort required to port the application to .NET Core, based on the number of incompatible APIs it uses.”
Now, if we dig in a bit deeper, we’ll get to the solution assessment, which contains a more detailed explanation of each of the projects that comprise your solution.
Overview of project compatibility
Here, you’ll have the Projects and all the same information from the overall solution (Incompatible packages, Incompatible APIs and Portability Score), plus a few more interesting tabs. The Project References is a cool graph with the dependencies but, to be honest, we didn’t benefit much from it this time. However, it was useful when we migrated Service Studio since it was a big solution with many dependencies.
Project references graph
The NuGet packages tab is just what I mentioned before, but in detail. Whenever the tool does not have access to or does not know the package, it will mark it as incompatible, and you'll have to go to the source to analyze it.
APIs and Source Files, on the other hand, are little treasure troves. APIs will highlight what is not available in .NET Core. For some of them, the tool can give you a proper replacement suggestion. For others, like Windows Forms or WCF… well, you’ll have to rewrite your application. Windows Forms is highly dependent on Windows APIs, so there’s no way around it.
API compatibility
The Source Files tab was the coolest feature. It will guide you on needed changes to each file and recommend a ‘fashionable’ replacement. Here’s an example:
After you’ve analyzed everything, you can either export the report or even port the projects using the tool. You can always return to the analyzer later, as it saves the results. If you use the tool to do the port, you can do it “in place” or copy it to a new location:
Using the tool to perform the migration
In the end, your solution most likely won’t compile because of the non-compatible packages and unreplaceable APIs (for instance, Windows Forms). You’ll have to do the manual work.
In our case, as we only wanted to port a part of the complete solution, we got lucky: the hard part — Windows Forms and WCF — was left out, and most of the manual work was actually quite simple.
Choosing the Right Tools
Whenever I do a technical presentation about different software options, I always get hit by the question: which one is the best? As with everything in engineering, it depends. What’s most important? Cost? Speed? The developer’s learning curve?
The same principle applies here: is AWS Portability Analyzer better than Microsoft’s? Well, maybe. Or maybe you don’t have an AWS Account, nor you understand how it works, and you have Microsoft expertise in the house. Or perhaps you value UI simplicity. It depends. From my personal experience, I enjoyed the tool. The replacement suggestions are very useful, particularly if you’re on a research task.
As for porting to .NET Core, although this challenge was simpler than the one João Malés described, there’s no way around the fact that you’re moving from a big fat framework (.NET Framework) tightly coupled to Microsoft APIs to a stripped-down, cross-platform version (.NET Core). Unless you’re really lucky, you will have plenty of manual work ahead of you, and what’s more concerning, surprises. These tools result in many false negatives (even AWS Portability Analyzer failed to alert on some non-compliant calls, like ConfigurationManager.AppSettings[]). Properly account for them if you don’t have prior experience migrating from one framework to another.
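To illustrate the ConfigurationManager.AppSettings[] case: the call compiles on .NET Core once you reference the System.Configuration.ConfigurationManager package, which is part of why analyzers can miss it, but the idiomatic .NET Core approach reads configuration through Microsoft.Extensions.Configuration instead. The "ApiUrl" key below is purely illustrative:

```csharp
using System;
using Microsoft.Extensions.Configuration;

// Old .NET Framework pattern — compiles on .NET Core with the
// System.Configuration.ConfigurationManager package, but quietly returns null
// when no App.config is deployed alongside the app:
// string apiUrl = System.Configuration.ConfigurationManager.AppSettings["ApiUrl"];

// Idiomatic .NET Core replacement: read the same value from appsettings.json
// (requires the Microsoft.Extensions.Configuration.Json package).
var config = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("appsettings.json", optional: true)
    .Build();

string apiUrl = config["ApiUrl"];
```

This is the kind of replacement suggestion the Source Files tab points you toward; the surprises come from calls like the commented-out one, which build cleanly and only misbehave at runtime.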
Disclaimer: software evolves, and these AWS Portability Analyzer features may no longer be available (or may be different) when you read this post, particularly the UI. Nevertheless, the concepts and takeaways will likely remain valid.

Source: https://medium.com/outsystems-engineering/using-aws-porting-assistant-to-migrate-from-net-framework-to-net-core-7d7de72925f3 (César Afonso, 2020-12-29)
A deep dive into the world of impact investing

TL;DR: Impact investing is maturing and some exciting things are happening.
In June 2021, for the second time running, I held a Masterclass (for lack of a better word) on Impact Investing for the latest cohort of Future VC, a programme run by Diversity VC to encourage more people from underrepresented communities to enter the venture capital (VC) industry. So in the spirit of opening up VC, here’s my deck from the Future VC class as well as an accompanying longer blog hopefully demystifying the world of impact investing.
As I was updating my deck from last year, one thing became apparent. More VC funding is available for impact-driven startups in Europe compared to the previous year. It's wonderful to see the sheer number of European funds that have emerged in the last year alone, all dedicated to impact one way or another — with the likes of Pale Blue Dot closing an €87M fund to invest in tech companies tackling the climate emergency, Revent in Berlin with an initial close of €20M backing 'for profit, for purpose' startups, Remagine in Berlin securing €24M, providing high-growth, impact-led startups with revenue-based financing, and more. In general, the impact investing industry spans various asset classes and has seen incredible growth over the last few years. But personally, the massive growth of the sector in early-stage VC is by far the most exciting development. Especially when more and more founders are looking for value-aligned investors to add to their cap table because it matters who is on your cap table. And thankfully, there's no shortage of impact VCs now.
Still, many don’t actually know what impact investing is and often peddle quite a few misconceptions. Before we dive deeper, let’s address the 🐘 in the room. ESG is all the rage right now, especially in the VC industry. ESG investment is set to grow rapidly, and that’s great. We need more companies and VCs to get their house in order and care about doing the right thing across the environmental, social, and governance domains. VCs in particular need to step up when it comes to considering the broader risks to human rights violations. This report from Amnesty USA highlights that none of the world’s top ten largest VC firms have sufficient human rights due diligence policies in place.
It’s also important to highlight that a screening process that’s merely focused on avoiding harm (often referred to as ESG risk management) or that might benefit stakeholders (often referred to as pursuing ESG opportunities) doesn’t always result in net-positive outcomes. On the other hand, impact investors are driven by the motivation that it is not enough to only avoid harm in investment decisions. Instead, impact investors make the conscious decision to use capital to contribute to solutions. With a multitude of pressing social and environmental problems from the climate crisis to insecure work, to increasing poverty and racial inequality, and more, we actively need to invest in purpose-driven companies looking to drive positive outcomes for people and planet.
At BGV, we believe technology plays a big part in addressing these challenges, which is why we’re focused on investing in tech for good companies driving both purpose and profit at scale. That’s not to say that technology is the magical formula to all our problems. Despite being a strong advocate for #TechForGood, we try really hard not to further the narrative of tech solutionism — and if you’re unfamiliar with the concept, read this brilliant article from Evgeny Morozov.
“Why would a government invest in rebuilding crumbling public transport systems, for example, when it could simply use big data to craft personalised incentives for passengers to discourage journeys at peak times?” In the Guardian, from Evgeny Morozov, “The tech ‘solutions’ for coronavirus take the surveillance state to the next level”
Many of the issues we’re faced with stem from a long legacy of systemic discrimination, inequality, and racism and won’t be solved by tech. But we can explore how tech innovations can significantly contribute to better outcomes for people and planet in spite of it. Let’s face it — the world is f***ed. The latest Sustainable Development Report 2021 highlighted how the COVID-19 pandemic and resulting economic crisis is a setback for sustainable development everywhere. Last week’s IPCC report is a stark reminder that the climate crisis is rapidly getting worse. That’s why we need to be more conscious about the investments we make. What impact investors aspire to do, regardless of whether an investment is into a tech company or not, is to ensure that the investments we make today have a material effect on important positive outcomes for underserved people and the planet.
In a world where billions (yep, BILLIONS) have been deployed in Europe in 2021 alone (during a pandemic and economic crisis, I might add) to ensure consumers get their arrabbiata sauce in less than 10 minutes, investors need to understand how their deployment of capital will be judged by others (history has its eyes on you). For further reading on this topic, read this op-ed in Sifted from Johannes Lenhard.
But without further ado, let’s dive deeper into the world of impact investing.
What is impact investing?
To put impact investing into context, it's always good to remind ourselves that all investments have impact — both positive and negative. The most common definition of impact investment derives from the Global Impact Investing Network (GIIN), which defines impact investments as investments "made with the intention to generate positive, measurable social and environmental impact alongside a financial return." The very nature of this definition allows it to focus on an investor's approach to investing rather than membership in a specific asset class.
‘Impact’ in impact investing is broadly defined as any meaningful positive change due to specific actions. Intentionality plays a significant role in the impact investing world, both in terms of investors’ intentions to allocate capital to drive better outcomes and assessing founders’ motivations and ambitions to start and run a business that has purpose at the core of it. Impact investors seek a financial return, albeit across a whole returns spectrum running from below-market-rate to risk-adjusted market rate. But perhaps the most distinguishing characteristic is impact measurement, dubbed the hallmark of impact investing, signalling the commitment of investors to measure and report on the social and environmental performance of portfolio companies.
However, it’s worth noting that not all investments promoting sustainable development are classified as impact investments. Take investments in electric vehicles as an example. A traditional VC, Investor A, might invest purely because of the expected financial returns, as the electrical vehicle market is growing rapidly and adoption is promoted and supported by regulation. On the other hand, Investor B, who is an impact investor, lists the potential for a massive reduction in carbon emissions as one of the main motivations to invest and has the critical intent to deploy capital to drive positive environmental outcomes, but also seek significant financial returns. Is one investment approach better than the other? Perhaps. Perhaps not. But you can pretty much assume that the impact investor will demand a higher degree of accountability from the company to evidence and measure the environmental outcomes than the traditional investor unless they have a strong ESG mandate.
Sizing the market
Impact investing is a rapidly growing field, and the impact investing landscape has gradually become more prominent over the last decade. According to GIIN’s 2020 Annual Impact Investor Survey, the full impact investing market size is $715 billion, covering the assets under management (AUM) of over 1,720 organisations globally. The IFC estimated investor appetite for impact investing to be as high as $26 trillion, of which nearly $5 trillion (19%) is in private markets, involving private equity (PE), non-sovereign private debt, and VC. The market offers great potential, and new investment opportunities and vehicles are emerging to finance and support impact-driven founders enabling founders and investors to pursue both impact and profit as they become inextricably linked.
The industry has undoubtedly witnessed significant shifts from consumers and investors alike, who increasingly want to put their money where their mouth is. Research into consumer trends from Zenogroup’s Strength of Purpose Study (2020), Deloitte (2019), and many others suggest there are increasing shifts in consumers’ purchasing behaviour towards purpose-driven businesses and brands they trust to do good. Simply put, people want to buy products and services from companies they perceive to do good and drive social impact.
“Purpose-driven companies witness higher market share gains and grow three times faster on average than their competitors, all while achieving higher workforce and customer satisfaction.” Deloitte, Purpose is Everything, 2019.
On the investor side, research suggests that individual investors and limited partners (LPs) increasingly care about the social and environmental impact of the companies and portfolios they invest in. For example, Atomico’s 2020 State of European Tech Report suggests that 45% of LP respondents to the survey require their GPs to report on their portfolio’s social and environmental impact, and 41% are considering implementing this requirement. Arguably, demographic shifts and generational wealth exchange have also meant that more and more individuals are increasingly interested in impact investing. For example, a 2019 study from Morgan Stanley suggests that more than 8 in 10 US individual investors now express an interest in sustainable investing. And we see this shift in consumers with attempts to democratise and open up investing through impact investing platforms like Tickr, who say that 90% of their users are investing for the very first time.
In the past, you might have come across asset owners thinking investing in impact violates their fiduciary duty because it doesn’t maximise risk-adjusted returns (think pensions who manage millions and millions of individuals’ retirement savings). But nowadays, asset owners see the threats to the long-term value of their assets and the wellbeing of their clients, critically due to climate change (again, we’re f***ed, if we don’t act). More and more asset owners see clear benefits from integrating impact, ditching the trade-off mentality between investment performance and the pursuit of purpose and profit. A 2020 survey from Cambridge Associates found more than half of the respondents, with the majority being foundations and endowments, taking active steps towards sustainable or impact investing. Many increasingly recognise how the state of the world affects their responsibility to beneficiaries and are adopting new interpretations of fiduciary duty. For instance, this group of pension funds is working on practical steps to integrate sustainability into investment practice, realising that climate change threatens the fund’s ability to uphold its fiduciary duty. Clearly, we should all ask ourselves what world we want to retire into. We can ensure that asset owners, especially stewards of long-term capital like pension funds, actively use their capital to drive change. And BGV’s recent acquisition by Connected is one of hopefully many more emerging examples celebrating this shift in VC.
Impact measurement, the hallmark of #impinv
Translating intent into action is perhaps the most significant advancement in the maturing sector with increasing levels of sophistication in impact measurement and management (IMM) practices. Robust IMM practices certainly help founders in various ways. Adhering and setting standards helps founders crucially protect their purpose early on, validate their product or service and create an impact-driven fundraising narrative, as impact investors raise the stakes in their due diligence. All this applies to general partners (GPs) as well ;)
Impact measurement seems complex, but it really isn’t (though I work with a bunch of people who love this stuff as much as I do — so I’m definitely in my own little filter bubble). There are different methods available, and this guide from Best and Harji at Purpose Capital provides helpful context for the various processes to measure and manage impact. Generally, impact investors have many resources to work from, which can be distilled into principles, frameworks and standards that provide guidance on setting impact objectives, measurement and reporting on impact performance. According to the GIIN 2020 Survey, the most commonly used frameworks were the UN Sustainable Development Goals commonly referred to as the SDGs or Global Goals (73%), the IRIS Catalog of Metrics (46%), IRIS+ Core Metrics Sets (36%), and the IMP’s five dimensions of impact (32%). Most investors in the sample (89%) use a blend of three tools, systems or frameworks to measure and manage their impact, with only a small proportion using proprietary methods.
Figure from GIIN 2020 Survey showing the use of tools, frameworks, and systems, by purpose
Let’s look at some of the various IMM tools that help investors translate intention into impact results.
Principles: In the impact investing space, various sets of principles serve as the foundation for broad rules and best practices for the industry. They differ from frameworks and standards, as they more often than not communicate intent and might require a public commitment strengthening accountability in the space. Relevant examples include the UN Principles for Responsible Investment (UNPRI), which launched in 2006, and the IFC Operating Principles for Impact Management, commonly referred to as the Impact Principles, established in April 2019.
Frameworks: Frameworks are specific methodologies that translate impact principles and intent into practice. Examples include the UN SDGs and the IMP’s five dimensions of impact. Though, as someone who previously worked in advocacy for a civil society organisation involved in the consultation process for setting the indicators, I’d like to point out that the SDGs might provide a strategic blueprint for prosperity and peace for people and planet. Still, they left out some of the most marginalised communities. For example, considering that a large proportion of UN member states still criminalise homosexuality and being queer, it’s no surprise that the goals largely leave out targeted provisions for the LGBTQ+ community. This guide from Stonewall International provides substantial insight into the challenges faced by LGBTQ+ folk across all 17 goals. It suggests practical actions to ensure any progress made towards the SDGs also meets the needs of LGBTQ+ individuals. Overall, alignment with the SDGs is relatively easy to demonstrate, but translating alignment into action requires more than a big vision. And to measure net-positive impact, we need more. And that’s where standards come in.
Standards: Standards refer to taxonomies or a set of core metrics applied to specific verticals and sectors. Standards help determine the type of data you want to collect and measure to validate and evidence your impact, and mitigate impact washing. Examples include IRIS+ Core Metrics Sets, the SDG Compass linking the SDGs to the Global Reporting Initiative’s (GRI) Sustainability Disclosures, B Impact Assessment and many more.
“No one line of inquiry and evidence is going to tell you everything. IMM should help you ‘manage forward’ to improve your impact over time, rather than just look back at what impact has occurred.” Steven Godeke & Patrick Briaud, Rockefeller Philanthropy Advisors, The Impact Investing Handbook
Depending on your fund’s investment thesis and asset class, standardisation might seem like a Herculean task. It can also reduce the precision of information conveyed about how exactly portfolio companies are achieving transformational impact. Aggregating information into one value doesn’t always capture the complexity of the impact achieved. For example, at the end of 2020, BGV’s portfolio companies positively impacted 17 million lives. But who are the 17 million people? Some were refugees and internally displaced people using Chatterbox to gain access to decent work, where otherwise they might not have earned an income due to their displacement. Some were young women who swapped clothes to reduce the excessive amount of clothing going to landfill through Nuw. And some were people accessing vital health services during a global pandemic with DrDoctor. No one line of evidence will tell you everything, which is why it’s important to always leave enough room for qualitative methods to highlight impact stories. Regardless of how you structure your IMM practice and deliver platform support for portfolio companies to report on specific metrics and outcomes, it’s important to remember that this is an iterative process that will eventually change and evolve.
Impact alignment in the investment process
Impact alignment in the investment process is possible at various stages. If you are an investor or someone looking to start investing in impact, I hope the resources and questions to ask yourself will help.
Investment and portfolio strategy: What is your fund’s investment thesis? How does it align with your investment model? What else in addition to capital can you contribute?
See examples here from Bethnal Green Ventures, Future Positive Capital, and Kapor Capital. BGV, for instance, mandates that a company embeds their social and/or environmental mission into the articles of association to ensure no mission-drift occurs as companies scale, similar to Obvious Ventures’ World-Positive Term Sheet. Incidentally, this is also one of the first steps a company has to undergo to certify as a B Corp. We’re also conscious about the contribution we can make to our portfolio companies aside from providing capital. At BGV, we run a 12-week acceleration programme to help founders build and launch their tech for good businesses and provide further platform support to our portfolio teams. 34% of our portfolio companies believe that without BGV their product or service would not have existed today, and a further 50% believe that without BGV they would not have been as far along in taking their product to market. So if you’re setting up a fund, consider not only the capital you provide but also what type of non-financial support can help founders level the playing field in making their business a success. Take into account what value-add you can provide, especially if you’re supporting first-time founders navigating the murky world of investment or founders from marginalised communities, who often lack the networks to raise capital. At a minimum, your fund’s investment strategy and approach should clearly link intent to asset selection, which in turn is based on a credible investment thesis. The Impact Investing Handbook by Steven Godeke & Patrick Briaud is an excellent resource that takes you on a step-by-step journey to adopt an impact lens to your fund structure, approach and portfolio management.
Investment screening and due diligence: How do you screen deals for impact? How do you validate a company’s impact?
This is probably one of the harder ones for VCs, especially if you’re a fund at the earlier spectrum of early-stage investing (btw early-stage is a stupid, ambiguous term). At BGV, we initially screen investments based on whether they broadly align with the impact outcomes we seek in the world — A Sustainable Planet, A Better Society and Healthy Lives. We have a few dedicated questions at various points in the pre-investment and due diligence phases that are sector-agnostic but can be tailored to the respective businesses. Norrsken VC provide a really good example of how they screen for impact and sustainability at various stages of the investment process. Needless to say, it is much easier to screen for impact the more mature companies are. But it’s often incredibly hard at the earlier stages because founders might not consider themselves to be purpose-driven or impact-driven. Thus, a screening process that ensures alignment with the impact outcomes you seek and clearly articulates your thesis is essential.
Investment management: How can you help founders understand how to measure impact effectively? How do you manage a fund’s impact performance?
There are numerous standards and measurement frameworks, and many more ESG frameworks are currently in development. It’s important to note that a different level of evidence might apply depending on the company’s maturity. Nesta’s Standards of Evidence is a helpful framework to assess at which stages we can expect varying levels of confidence as to how a company’s intervention has a positive impact.
For tech for good ventures, we also need to ensure founders and investors understand the potentially harmful (un)intended consequences of products and services. Doteveryone’s Consequence Scanning Toolkit and Omidyar Network’s Ethical Explorer are toolkits we use as part of our programme to help founders early on to think about responsible tech development and build a mindset where thinking about unintended consequences is not abstract. Instead, it’s focused on helping founders assess risk levels and prevalence and what potential mitigation strategies they can deploy. We’ve also started reporting on our portfolio companies’ impact risks that arise from trying to mitigate and avoid possible consequences. My brilliant colleague Yumi, who leads on BGV’s insights and operations, shares guidance here on how to assess your portfolio’s risks of unintended consequences and examples of how BGV’s portfolio companies validate their impact in this blog. Crucially, you should consider applying the same rigour to ensuring accountability to impact targets and reporting results in the same way investment professionals do for financial performance.
Exit: What does a responsible exit look like?
It’s often hard to influence a company’s impact trajectory for early-stage investors who usually take minority stakes, which are then diluted in future funding rounds, just as the impact and ESG risks grow. For funds like BGV, it means ensuring we have robust processes in place to spot any potential risks and build great relationships with founders to ensure we can support their trajectory to conscious scaling where needed. So what happens if an exit is on the horizon? A growing number of VCs are integrating a sustainability clause into term sheets and shareholder agreements, and examples of responsible exits have emerged, most notably from Rubio Ventures. It’s still a relatively unexplored area, so please leave a comment if you come across any other examples.
Debunking myths
So what are these common misconceptions we hear in the impact investing space time and time again?
Myth: Impact investing means compromising on financial returns.
Debunked: No-oh-oooh. Think of profit and returns on a large spectrum with different financial return expectations.
67% of respondents to the 2020 GIIN Survey principally target risk-adjusted, market-rate returns, with the remaining respondents seeking either closer-to-market-rate returns (18%) or below-market-rate returns, closer to capital preservation (15%). Overall, 88% of respondents reported meeting or exceeding their financial expectations.
More and more impact investors are seeking to invest in companies with apparent ‘lockstep’ — which intrinsically links their purpose to their commercial success. In other words, as impact scales, so do the financial returns for investors. As a result, companies are increasingly aligning their impact in the same direction as their EBITDA. Many studies have proven that you can indeed invest in impact and achieve net-positive impact with significant commercial returns in various settings. For example, this study from the Morgan Stanley Institute for Sustainable Investing reviewed the financial performance of over 11,000 mutual funds from 2004 to 2018 and suggests there is no financial trade-off in the returns of sustainable funds. Instead, investing in socially responsible companies is more profitable than investing in traditional companies. The 2021 Net Impact Report by the Upright Project also suggests that “making a positive impact is definitely not at odds with making profits.”
Similarly, Cambridge Associates and the GIIN launched the Impact Investing Benchmark in 2015, the first comprehensive analysis of the financial performance of market-rate PE and VC impact funds, with supporting evidence of the solid financial performance of these funds. The IFC provides further proof that you do not have to trade off between impact and returns, highlighting that the IFC’s realised equity investments delivered returns in line with or better than the MSCI Emerging Market Index from 1988 to 2016. Notable VC examples include Kapor Capital, who in 2019 reported a 29.02% internal rate of return (IRR) and 3x Total Value to Paid In (TVPI), which elevates them to the top quartile of VC firms, and BGV with a 1.9x TV/Cost.
So, let’s ditch this outdated perception of a trade-off between impact and profits and acknowledge that there is a broad spectrum of returns expectations in impact investing. Check out Omidyar Network’s whitepaper “Across the Returns Continuum” for further reading on this topic.
Myth: By virtue of seeking impact outcomes, you’re a good company.
Debunked: Ah hell no.
Theranos, I rest my case.
Just because a company is trying to make the world a better place doesn’t mean they achieve a net-positive impact. The net impact of a company is not only the result of positive impact outcomes but also of its operations. Good governance, fair, decent and equal work and many more factors play a huge role in helping companies on a responsible trajectory to scale. So where can you start? For any new companies, honestly, try out the B Impact Assessment. It’s a lengthy questionnaire, sure, but it will help you spot the areas where you might need improvement or significant support from your community.
Quibbling over semantics
“Amid so much suffering and injustice, we cannot resign ourselves to the reality we’ve inherited. It is time to reimagine what is possible.” Ruha Benjamin, Race After Technology
We put labels on things to try to differentiate and make sense of things in this world. But sometimes, it really doesn’t matter. Call it impact investing, call it responsible investing, call it value-aligned investing, call it what you want. It doesn’t matter. What does matter, however, is recognising the urgency with which we need to respond to the challenges of our lifetime and for future generations to come and put our money to work. | https://medium.com/@Dama_Yanthy/a-deep-dive-into-the-world-of-impact-investing-7b0dce0d3aa0 | ['Dama Sathianathan'] | 2021-08-17 12:27:36.867000+00:00 | ['Impact Measurement', 'Esg', 'Venture Capital', 'Impact Investing', 'Technology'] |
1,452 | The Learner Journey | Online courses are a journey on which learners willingly embark to empower themselves with valuable knowledge, so they may secure the personal, professional and academic success they aspire for.
It is not an easy journey. Willpower and motivation are a must in order to reach the final destination — the success of completion.
Learners undergo several stages to complete an online course or specialization. As MOOC providers, we work hand-in-hand to help them reach this final destination.
Education providers can best support learners by keeping their needs and challenges at the forefront. When I entered the world of online learning at Edraak, it was clear that understanding learners inside out was the first step towards delivering personalized products with great end value.
When learners engage in online learning, their video viewership, assessment activity, and course surveys leave a trail of data, which educators can use to derive insights on their needs, challenges, and product satisfaction rate.
One way of doing this is to study learners’ activity data, which is stored in platform databases such as basic demographics, video and exercise activity. While this data lends a general overview, it still leaves many questions unanswered. For example, video engagement might tell us which topics a learner is interested in, but it won’t convey their real purpose for enrolling in a course. Low performance on a particular exercise shows us that learners are struggling at a certain point, but it doesn’t inform us what went wrong along the learning journey. There are other important open questions, like: How did they feel about the course structure design? Did they enjoy the activities? Is there something missing they want us to add in the future?
The most common practice to gather insights is surveys. However, a survey won’t deliver strategic insights unless designed and embedded carefully. At this juncture, we decided to assemble forces from different teams to determine:
The questions that uncover learner needs and goals.
The optimal way for an online learning platform to design and present surveys to obtain the most reliable feedback from learners.
The reporting mechanism of survey data.
The value of the answers returned to different teams.
Survey design
Which key insights should online educators look for?
To deliver substantial value to learners, we concluded the following insights were needed to map the learner journey:
Demographics: Understanding who our learners are by gathering data on their age, gender, occupation, and professional level enables us to cater better to our audience personas.
Understanding who our learners are by gathering data on their age, gender, occupation, and professional level enables us to cater better to our audience personas. Brand value and perception: Do learners trust the Edraak brand? What do they expect from it? What do they appreciate about the platform?
Do learners trust the Edraak brand? What do they expect from it? What do they appreciate about the platform? Drivers and goals: What are learners’ intentions and objectives? Why are they joining Edraak specifically?
What are learners’ intentions and objectives? Why are they joining Edraak specifically? Content satisfaction and experience: Are learners interacting with the different features on the platform? How do they rate the content quality?
Are learners interacting with the different features on the platform? How do they rate the content quality? Challenges and behaviors: Gaining insights on the platform learner experience, preferred learning styles, most-used features, etc.
Gaining insights on the platform learner experience, preferred learning styles, most-used features, etc. Outcomes (impact): To what extent do our courses help learners achieve their goals? Was their motivation/driver fulfilled? What real-life impact did Edraak have? What did Edraak help them achieve? (though we kept this for the future).
Silent caveats!
Survey experts recognize the array of factors involved in survey design and that the methods through which responses are collected can affect their validity. Previously at Edraak, we embedded one survey at the beginning of the course and one at the end. This caused numerous issues:
Pre-course surveys led to misleading results as learners had not yet experienced the content.
Post-course surveys introduced a bias in results, as respondents were typically interested in the course. This meant we were missing out on those who dropped out and understanding their reasons for doing so.
How did we solve this issue?
We did something rather unusual. We designed a four-split survey to be filled in at the beginning of the course, at the end of week one, mid-way through the course and, lastly, upon its completion. After putting a lot of thought into this, we figured this was the best way to attain an overview of learners’ behaviors and course satisfaction.
Split 1:
To be placed after the first video in unit 1.
This group of questions focuses on learners’ demographics and motivations/goals.
This split targets all course learners before anyone drops out and seeks to gather information on their main goals and reasons for enrolling in order to measure their fulfillment at the end of the course.
Split 2:
To be placed at the end of week 1 in unit 1.
This group of questions focuses on learners’ experiences and satisfaction with the course content and features.
This split targets learners who have viewed enough of the course content to evaluate it. Since the dropout rate usually increases after week 1, it is imperative that we gain insights from the largest number of learners regarding the course content and features.
Split 3:
To be placed midway through the course.
This group of questions focuses on learners’ challenges, behaviors, and satisfaction, as well as on their perception of the brand value.
This split targets learners who are interested in the course and have formed a good overview of the course strengths and weaknesses. This split contains many open-ended questions as the input from learners at this stage is highly beneficial.
Split 4:
To be placed towards the end of the course.
This group of questions focuses on learners’ overall satisfaction with the course and platform, as well as gauges their willingness to enroll in other Edraak courses.
This split secures insights on learners’ overall course experience and can be linked to the previous splits to assess which learning goals were met or missed.
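The four-split design described above can be summarised as a small data structure. This is a hypothetical sketch for illustration only — the field names, thresholds, and helper function are my assumptions, not Edraak's actual platform schema:

```python
# Hypothetical sketch of the four-split survey design described above.
# Placements and focus areas come from the text; everything else
# (field names, progress thresholds) is illustrative.
SURVEY_SPLITS = [
    {"split": 1, "placement": "after the first video in unit 1",
     "focus": ["demographics", "motivations and goals"]},
    {"split": 2, "placement": "end of week 1 in unit 1",
     "focus": ["content and feature satisfaction"]},
    {"split": 3, "placement": "midway through the course",
     "focus": ["challenges", "behaviours", "brand perception"]},
    {"split": 4, "placement": "towards the end of the course",
     "focus": ["overall satisfaction", "willingness to re-enrol"]},
]

def splits_due(progress: float) -> list[int]:
    """Return which splits should have been shown by a given
    course-progress fraction (0.0-1.0). Thresholds are illustrative,
    not the platform's real trigger points."""
    thresholds = {1: 0.0, 2: 0.25, 3: 0.5, 4: 0.9}
    return [s for s, t in thresholds.items() if progress >= t]
```

One benefit of keeping the design in data rather than hard-coding it per course is that the same placement logic can be reused across every course that embeds the surveys.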
Tough-To-Design Questions:
As usual in surveys, we faced some difficulty designing questions for various reasons, such as the sensitivity of information. For example, we wanted to ask about the size of the company the learners worked at but were concerned that adding ‘small company’ as an option might cause some discomfort. Another concern was using words like small, medium, and large as they are subjective and might not yield reliable results.
How did we solve this? We designed answer options as per the below:
1–5 employees
6–20 employees
21–50 employees
51–500 employees
500+ employees
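A nice side effect of numeric, non-overlapping ranges is that a raw head-count can be mapped deterministically back into the answer options. A minimal sketch (the function name is mine; labels mirror the options above, written with ASCII hyphens for simplicity):

```python
def company_size_bucket(employees: int) -> str:
    """Map a raw employee head-count to the survey's answer options."""
    if employees <= 0:
        raise ValueError("head-count must be a positive integer")
    if employees <= 5:
        return "1-5 employees"
    if employees <= 20:
        return "6-20 employees"
    if employees <= 50:
        return "21-50 employees"
    if employees <= 500:
        return "51-500 employees"
    return "500+ employees"
```

Because the buckets are objective counts rather than subjective labels like "small" or "large", every respondent (and every analyst) interprets them the same way.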
Visualization
Since the survey reports were meant as references, we decided to adopt an easy-to-understand infographic data visualization style using Google Data Studio. After blending data from the four splits, reports were produced for each course that contained the embedded surveys. | https://medium.com/edraak-engineering/the-learner-journey-608ecbdb7a04 | ['Rahma Atallah'] | 2020-12-27 07:43:42.936000+00:00 | ['Surveys', 'Education', 'Educational Technology', 'Technology', 'Edtech'] |
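Conceptually, blending the four splits into one record per learner amounts to an outer join on learner ID, so that learners who answered only some splits (e.g. those who dropped out after split 2) are still represented. The post doesn't describe the actual pipeline, so this is a hedged pure-Python sketch of the idea:

```python
def blend_splits(*splits: dict) -> dict:
    """Outer-join per-split response dicts ({learner_id: answers})
    into one record per learner. Partial respondents are kept, which
    is the point: dropouts' early answers still inform the report."""
    blended: dict = {}
    for i, split in enumerate(splits, start=1):
        for learner_id, answers in split.items():
            blended.setdefault(learner_id, {})[f"split_{i}"] = answers
    return blended
```

For example, `blend_splits(split1, split2)` keeps a learner who answered only split 1, with no `split_2` key, making dropout points easy to spot in the blended report.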
1,453 | There is nothing more ridiculous to me than hearing two people talk “at” each | Life is a journey of twists and turns, peaks and valleys, mountains to climb and oceans to explore.
Good times and bad times. Happy times and sad times.
But always, life is a movement forward.
No matter where you are on the journey, in some way, you are continuing on — and that’s what makes it so magnificent. One day, you’re questioning what on earth will ever make you feel happy and fulfilled. And the next, you’re perfectly in flow, writing the most important book of your entire career.
What nobody ever tells you, though, when you are a wide-eyed child, are all the little things that come along with “growing up.”
1. Most people are scared of using their imagination.
They’ve disconnected with their inner child.
They don’t feel they are “creative.”
They like things “just the way they are.”
2. Your dream doesn’t really matter to anyone else.
Some people might take interest. Some may support you in your quest. But at the end of the day, nobody cares, or will ever care about your dream as much as you.
3. Friends are relative to where you are in your life.
Most friends only stay for a period of time — usually in reference to your current interest. But when you move on, or your priorities change, so too do the majority of your friends.
4. Your potential increases with age.
As people get older, they tend to think that they can do less and less — when in reality, they should be able to do more and more, because they have had time to soak up more knowledge. Being great at something is a daily habit. You aren’t just “born” that way.
5. Spontaneity is the sister of creativity.
If all you do is follow the exact same routine every day, you will never leave yourself open to moments of sudden discovery. Do you remember how spontaneous you were as a child? Anything could happen, at any moment!
6. You forget the value of “touch” later on.
When was the last time you played in the rain?
When was the last time you sat on a sidewalk and looked closely at the cracks, the rocks, the dirt, the one weed growing between the concrete and the grass nearby.
Do that again.
You will feel so connected to the playfulness of life.
7. Most people don’t do what they love.
It’s true.
The “masses” are not the ones who live the lives they dreamed of living. And the reason is because they didn’t fight hard enough. They didn’t make it happen for themselves. And the older you get, and the more you look around, the easier it becomes to believe that you’ll end up the same.
Don’t fall for the trap.
8. Many stop reading after college.
Ask anyone you know the last good book they read, and I’ll bet most of them respond with, “Wow, I haven’t read a book in a long time.”
9. People talk more than they listen.
There is nothing more ridiculous to me than hearing two people talk “at” each other, neither one listening, but waiting for the other person to stop talking so they can start up again.
10. Creativity takes practice.
It’s funny how much we as a society praise and value creativity, and yet seem to do as much as we can to prohibit and control creative expression unless it is in some way profitable.
If you want to keep your creative muscle pumped and active, you have to practice it on your own.
11. “Success” is a relative term.
As kids, we’re taught to “reach for success.”
What does that really mean? Success to one person could mean the opposite for someone else.
Define your own Success.
12. You can’t change your parents.
A sad and difficult truth to face as you get older: You can’t change your parents.
They are who they are.
Whether they approve of what you do or not, at some point, no longer matters. Love them for bringing you into this world, and leave the rest at the door.
13. The only person you have to face in the morning is yourself.
When you’re younger, it feels like you have to please the entire world.
You don’t.
Do what makes you happy, and create the life you want to live for yourself. You’ll see someone you truly love staring back at you every morning if you can do that.
14. Nothing feels as good as something you do from the heart.
No amount of money or achievement or external validation will ever take the place of what you do out of pure love.
Follow your heart, and the rest will follow.
15. Your potential is directly correlated to how well you know yourself.
Those who know themselves and maximize their strengths are the ones who go where they want to go.
Those who don’t know themselves, and avoid the hard work of looking inward, live life by default. They lack the ability to create for themselves their own future.
16. Everyone who doubts you will always come back around.
That kid who used to bully you will come asking for a job.
The girl who didn’t want to date you will call you back once she sees where you’re headed. It always happens that way.
Just focus on you, stay true to what you believe in, and all the doubters will eventually come asking for help.
17. You are a reflection of the 5 people you spend the most time with.
Nobody creates themselves, by themselves.
We are all mirror images, sculpted through the reflections we see in other people. This isn’t a game you play by yourself. Work to be surrounded by those you wish to be like, and in time, you too will carry the very things you admire in them.
18. Beliefs are relative to what you pursue.
Wherever you are in life, and based on who is around you, and based on your current aspirations, those are the things that shape your beliefs.
Nobody explains, though, that “beliefs” then are not “fixed.” There is no “right and wrong.” It is all relative.
Find what works for you.
19. Anything can be a vice.
Be wary.
Again, there is no “right” and “wrong” as you get older. A coping mechanism to one could be a way to relax on a Sunday to another. Just remain aware of your habits and how you spend your time, and what habits start to increase in frequency — and then question where they are coming from in you and why you feel compelled to repeat them.
Never mistakes, always lessons.
As I said, know yourself.
20. Your purpose is to be YOU.
What is the meaning of life?
To be you, all of you, always, in everything you do — whatever that means to you. You are your own creator. You are your own evolving masterpiece.
Growing up is the realization that you are both the sculpture and the sculptor, the painter and the portrait. Paint yourself however you wish. | https://medium.com/@toledovseasternmichiganliveonn/there-is-nothing-more-ridiculous-to-me-than-hearing-two-people-talk-at-each-1156eccdd87b | ['Toledo Vs Eastern Michigan Live Tv'] | 2020-11-19 00:03:49.390000+00:00 | ['Technology', 'Sports', 'Social Media', 'News', 'Live Streaming'] |
1,454 | Intelligent virtual reality to support learning in authentic environments | Do you know how to improve learning through VR?
Next year, the number of VR users will grow to 171 million, and the VR market is expected to grow into a $15.9 billion industry by 2019. This technology may be helpful in many areas, including learning. A 3D environment generator can be used to make education more efficient, and VR technology is already used in hundreds of classrooms in the U.S. and Europe.
Support learning
Many learners in primary school struggle to reach appropriate levels of understanding and skill in basic subjects. Google has started to create VR education that teaches by drawing on students' preference for play. It also made Cardboard, a VR viewer for mobile devices that lets users explore different countries in various centuries.
The company Discovr works on VR software, and it found that VR helped learners retain 80% more knowledge than traditional teaching methods. There is also an education app named AI that develops avatars and communicates as in social media, assessing learners with questions that require searching for the answers. Users get valuable feedback that helps them cope with difficult issues.
Virtual reality is also used to teach more than basic subjects: it helps learners explore different cultures, opinions, and so on. For example, the company Diageo wants to use virtual reality to teach about the consequences of drunk driving. It suggests experiencing a ride in the passenger seat of a car with a drunk driver by wearing a VR headset.
Educators have used VR tools to create recreations of historic sites and to engage learners in subjects such as economics, literature, and history. Learners can have transformative experiences through different interactive resources.
Use simulations to implement new skills
Virtual reality brings immersive experiences that create situations the learners would not otherwise have access to. It makes it possible to provide "what if" scenarios for discovering and manipulating aspects of virtual reality. Students learn to apply new skills and notice their weaknesses.
For example, a virtual submarine helps learners discover the natural processes that occur beneath the surface of a rock pool. Learners may explore things at a microscopic level or investigate ancient countries.
AI can supplement virtual reality, making it possible to interact with the learner and give feedback on the learner's actions as in real life. Intelligent Tutoring Systems can lead the process as well, giving advice to students and ensuring that they reach appropriate learning goals without becoming frustrated.
Apply virtual pedagogical agents
A social context will sustain cooperative activities and create better personal interactions. Virtual agents aim to build long-term cooperation with learners, and new technology allows the creation of such personal tutors to improve learning outcomes.
Virtual coaches are important elements of digital environments such as edutainment applications. They help lead the learning process and motivate students. Virtual characters provide cognitive support, decrease learners' frustration, and act as educational companions. Virtual assistants may be present verbally, through video, as virtual reality avatars, or as artificial characters.
These coaches require human-like behavior with appropriate movements and gestures. Virtual agents need to support conversations and use speech to explain things. Research has shown that a human voice helps learners remember and understand the material better.
You need to use virtual characters for delivering guidance, not only for engagement. Pedagogical agents can also teach how to cope with difficult situations, such as bullying incidents. For example, FearNot is a virtual environment where students who have had this negative experience play the role of an invisible advisor to a character in the play who is bullied. It helps them explore bullying issues and be prepared.
©Itsquiz — Be competent! | https://medium.com/age-of-awareness/intelligent-virtual-reality-to-support-learning-in-authentic-environments-263c14fa4640 | [] | 2017-02-13 11:56:33.367000+00:00 | ['Technology', 'Software', 'Education', 'AI', 'Virtual Reality'] |
1,455 | 5 Best Home Security Camera System | In 2020, a security camera is very important for monitoring every home and office. If you want to secure your home with technology, then a security camera is perfect for you, and it does the same job for your office, school, gym, etc. We can hardly think about safety without a security camera; a security camera can solve our safety problem.
In this article I will show you the 5 best home security camera systems for your security needs. I have more than six years of experience in this field.
Main Benefits of a Home Security Camera System
1. Monitoring: You can easily monitor your children or other family members at home. You can also monitor your office staff and see what they are doing for your business. Some systems also offer video calling and remote monitoring over the internet.
2. Business Profit: Your business will grow after you install a security camera system, because monitoring your business premises greatly improves security. Nothing will be lost without your knowledge, so a security camera has a real impact on business growth.
3. Low Cost: A security camera system can reduce your overall cost. If you employ a security guard for the whole day, his salary would be at least 5,00 per month, but the cost of a security camera system is a one-time expense.
4. Recording: A security camera system records at all times. If you use an SD memory card, then you can easily record a full month. This is a very big advantage of using a security camera system.
# Night Owl Security Camera System
You might expect a technologically advanced home security company to start in a place like Silicon Valley or maybe Seattle, but Night Owl is headquartered in Naples, Florida, a town of about 19,000 residents known for having a higher-than-average proportion of millionaires. Night Owl security cameras, however, are designed for everybody, not just those with hefty salaries.

The company's employees and servers are based in sunny Florida, while the equipment is manufactured overseas. Night Owl markets its products to homeowners, business owners, and government offices alike. It also bills itself as the favorite wired security camera brand in the United States. The Night Owl wireless 1080p smart security system is a great pick for 2020, and many people use it for home or office monitoring because of its strong technology features.
# PoE Security Camera Systems
PoE stands for Power over Ethernet, and PoE security cameras are also popular in the United States. Many people use this type of camera for outdoor monitoring, such as garden or front-door security. You can also mount one along the road or on top of the house to watch for burglars.
Who doesn’t want the foremost efficient security camera system with most reliability, specially when it involves the installation of those cameras including wires for his or her functioning? Yes, right, everyone does want those best security cameras without much annoying with those complex wiring.
So, here we’ve all times best and supreme POE security cameras to assist you get the simplest performance with the foremost reliability with all the required info you would like to finalize your decision about POE cams.
I have done Best POE Security Camera Systems, which can assist you in getting the simplest residential PoE security camera or a number of these are often considered as among the simplest industrial PoE security cameras & for other purpose and places too.
Among all the kinds of surveillance systems that are available within the market, PoE Security Cameras are getting the foremost popular surveillance systems lately.
#PTZ Outdoor Security Camera
The lens featured in a PTZ camera will also have an impact on resolution and field-of-view options. A lens with a low focal-length number will give a wide field of view but less magnification, while a higher focal-length number will provide more magnification. That is why choosing the best PTZ outdoor security camera matters.

PTZ IP cameras, just like ordinary cameras, come with different resolution capabilities. If you choose a high-resolution one, you may also need a high-resolution megapixel lens so that the camera can maintain the resolution required to produce a clear and detailed image. If you combine a high-resolution camera sensor with a less powerful lens, then your images won't be as detailed and crisp as you would like.
Conclusion
The best home security camera system is something every homeowner needs for home monitoring. There are many types and qualities of security cameras on the market in 2020, with different technologies for different purposes. When you use a security camera for the home, an IP or battery-powered wireless outdoor security camera is the best choice. If you want one for an office, gym, or school, then an IP or PoE camera is the best.

A security camera is always worthwhile, whether to grow your business or to secure your home. So don't miss out on a modern, up-to-date solar-powered security camera.
1,456 | A Closer Look Into the RED Platform Ecosystem — How It All Functions | An ecosystem is usually defined as a large community of living organisms existing in a particular area. The living and physical components are linked together through nutrient cycles and energy flows. The RED Platform functions in a similar way. Only with electricity. Here’s a closer look!
The RED Platform — The Vision is Being Fulfilled
The RED Platform is a blockchain-based, decentralized energy trading software. Users are able to participate in peer-to-peer energy trading and operate transactions between consumers and energy producers. In order to incentivize the usage of renewable energy sources, individuals that buy renewable energy are awarded Green Certificates that can be sold for profit, also highlighting them as green energy users, interested in sustainability.
The Ecosystem’s Components Work For Its Users
The system behind it all is made of three key components that work together to bring you the energy democracy we envisioned at the start: the MegaWatt Tokens (MWAT), the RED Platform, and the RED Franchise.
Our RED ecosystem allows users to send and receive energy worldwide using the MWAT. The tokenized energy traded on the RED Platform can be physically delivered at local rates in countries with deregulated markets where Restart Energy is directly present or through one of our franchises.
The RED MWAT tokens are ERC20 utility tokens that grant access to the RED Platform (RED-P) software and to the RED Franchise (RED-F). All activity is measured by a real-time WiFi meter that records all incoming and outgoing energy consumption and production at the system level.
Access to the platform is granted if the user already has MWAT in their RED account. Large producers, including those that operate green energy (solar, wind, hydro, biomass), prosumers, and small producers sell energy into the system and receive MWAT from the buyers, who are also members of the platform. The kWh received by the consumers are then tokenized and added to the user's corresponding electronic wallet, storing their digital revenue for later use.
The RED Franchise is the first power retail franchise that makes it simple and easy to start and operate your own power utility company. If someone offered you the chance to buy a McDonald's franchise, would you refuse? Let us tell you more about how you can easily become a micro-entrepreneur in the renewable energy sector and be a responsible prosumer while spending time with the ones you love. Introducing the RED Retail Energy Franchise!
The RED Platform’s plan has three pillars through which we aim to build a global energy power supply platform: using blockchain technology, developing a peer-to-peer network and empowering the built of independent franchises, making way to the rise of the prosumer, and a world in which we all strive towards a more sustainable energy production and consumption pattern. Waste-based consumption hasn’t always been the case.
The tactics that derive from this vision deal with further developing the RED Platform and expanding globally. We also need to change the energy retail business model by introducing the concept of the prosumer and incentivizing local entrepreneurs to start and grow their own small businesses, transacting energy and being compensated for their operations. By building the platform on blockchain infrastructure, we help any registered user buy from or sell to any other registered user, and we encourage the building of renewable energy stations, which act like power generators that fuel the digital society we live in, from households to corporations and technology start-up initiatives.
We planned for the transformation to happen on the blockchain. We imagined an e-business model with transparency and safety at its core, one that incentivizes each user to be an active part of the community by rewarding renewable energy consumption with green energy certificates, thereby bringing change closer through each individual.
1,457 | A Recap on Technology in 2019 | The WIRED25 Conference this year focused on a theme familiar to everyone in the tech world: move fast, fix things. Twenty-two founders, youth plaintiffs, professors, and visionaries filled the stage with eye-opening discussions about hot topics: SECURITY, CLIMATE CHANGE, DATA. Want a quick skim of what top leaders were talking about in 2019? Here we go:
Jeff Weiner: LinkedIn, CEO
Working on:
Alleviating harassment in DMs; breaking network biases (hiring only people similar to us, within our circles).
What to watch for:
The +1 Pledge — Create opportunities for people who aren’t just in your 1st degree network. Help someone, whether it’s liking a post or replying to a comment, and they will be able to influence others as well.
Anne Neuberger: NSA, Director of Cybersecurity
Working on:
Securing technologies and security standards.
What to watch for:
Low orbit sensor platforms and weaponized drones, which both have very high cost to defend and low cost to deploy.
Brian Acton: Signal Technology Foundation, Executive Chairman
Working on:
Understanding how ads play in big corporations.
What to watch for:
Innovation in business models outside of ads.
Mihir Shukla: Automation Anywhere, CEO + Cofounder
Working on:
Software bots that go through hundreds or thousands of systems and make decisions for end-to-end (E2E) processes.
What to watch out for:
Cutting time-consuming processes such as mortgage applications by half.
Dawn Song: Oasis Labs, CEO + Founder
Working on:
Trustworthy artificial intelligence.
What to watch out for:
Kara, a platform where you own your own health data.
Patrick Collison: Stripe, CEO + Cofounder
Working on:
The idea of “splitting” the Internet into blocks where each block agrees to the same rules.
What to watch out for:
Moving headquarters to South SF in 2021.
Chris Cox: Ex-Facebook CPO
Working on:
Acronym, a group that helps progressives build out their campaign and messaging tech stack. Planet Labs, a startup that builds satellites to track climate change.
What to watch out for:
“Tech can lead”
Vic Barrett, Kelsey Juliana, Levi Draheim: The Juliana Plaintiffs
Working on:
Juliana v. United States
What to watch out for:
Holding US leaders accountable for climate change.
Uma Valeti: Memphis Meats, CEO + Cofounder
Working on:
Growing meat from cells
What to watch out for:
Being available for the public to try
Ben Horowitz: Andreessen Horowitz, Cofounder
Working on:
What You Do Is Who You Are (Published Oct 2019)
What to watch out for:
Companies are built on strongly defined culture and ethics.
Adam Mosseri: Instagram, Head
Working on:
“People first”
What to watch out for:
Private like counts rolling out to the US.
Anca Dragan: UC Berkeley, Assistant Professor
Working on:
Intentions and cooperation between robots and humans.
What to watch out for:
Robots that work with, around, and in support of humans.
Astro Teller: X, Captain of Moonshots
Working on:
Helping project teams navigate through bumpy waters on the way to reality
What to watch out for:
“Be passionately dispassionate”
This means to try things, even if it’s wrong. But when the time comes to evaluate the feasibility, results must be looked at dispassionately. This requires | https://medium.com/@katharinejiang/a-recap-on-technology-in-2019-1f3b73121d8a | ['Katharine Jiang'] | 2019-11-27 02:39:52.344000+00:00 | ['Technology', 'Conference', 'Thought Leadership', 'Silicon Valley', 'Startup'] |
1,458 | “Despite the bullying and occasional lunch meat dumped on my head, I never stopped obsessing about technology.” | “Despite the bullying and occasional lunch meat dumped on my head, I never stopped obsessing about technology.” Megan Morrone · Nov 16
Attention my nerd friends, there’s another blog on Medium that you need to follow right now. It’s written by one of us — a true geek from the time before being a geek was cool. Lance Ulanoff, the former editor-in-chief of PCMag.com, PC Magazine, and Mashable, has been writing product reviews on Medium for a while, but now he’ll regularly be blogging about the tech stuff that matters from the perspective of someone who has been around long enough to know. | https://debugger.medium.com/despite-the-bullying-and-occasional-lunch-meat-dumped-on-my-head-i-never-stopped-obsessing-about-f4f52268da93 | ['Megan Morrone'] | 2020-11-16 20:29:41.053000+00:00 | ['Technology', 'Nerds', 'Blog', 'Gadgets', 'Lance Ulanoff'] |
1,459 | Consider This, and This, and also This… | Photo by Rikki Chan on Unsplash
Beginning in 2005, and for almost every year since then, four couples gathered together hours before the new year would begin. We would sit around, have a few drinks, listen to some old and new music, and review the year. At some time during the evening we would bring out the pad and pencil and everyone would make predictions about the coming year. Every year we would put the crowdsourced wisdom into the same pitcher so we could check for accuracy the next year.
What has become noticeable over the years is how the nature of the predictions have changed. In the early days, we seemed to care about a lot of personal events and some trivial stuff, such as whose kid might get pregnant, which one of them would get a new job, which one of us would stop working, who would win the World Series and Super Bowl, and the level of the Dow. There were also more serious concerns, such as where and when the next terrorist attack would occur, or what kind of event would disrupt the world.
On the last day of 2007, two of the eight people predicted that the stock market was over-priced and would be headed down. They did not know it would be caused by a real estate derivative trading debacle. Also that year, so many of the predictions began to get political. We tried to predict who would get nominated in each party and who would win the election. On each question, one or two of the eight would have made the correct prediction. But no one person was consistently more clairvoyant than the others.
What we didn’t realize at the time, was that two years after we began making guesses the world changed drastically. None of us sensed the powerful effects of 2007. That was the year the iPhone was introduced. It was also the first year of Twitter, and Facebook was only three months old. By 2010 Netflix put Blockbuster out of business. Google was already ruling the net and Apple continued to turn out new phones, iPads, and other beautifully made products that nobody thought they needed until they were here.
This year we will put the sixteenth sheet of paper into the pitcher. A lot has changed during those years, especially in the group. All of us are in our mid-70s. Almost all of us now have some kind of heavy diagnosis, although we are all still functioning well. We don’t want to predict who, or when someone won’t be able to attend the next gathering. Perhaps, because of that, we all seem to be more interested in what the future holds for us, our kids, and the considerable number of grandchildren who have arrived during those years.
However, making predictions has become a much more difficult task. Even deciding on what to predict is confusing. In addition, since I retired about four years ago, I have been teaching Life-Long-Learning (old-age) courses about the future. I have become very aware of how quickly expectations of what is coming change.
We are in a time of turmoil and flux. I expect that by 2030 some of that will have subsided and our paths forward will become clearer. In ten years we will have a better understanding of how to adjust to the many changes that are constantly occurring in how we live, work, and play. But the transition from now to then will not be smooth. There is a great deal that needs to be sorted out, decided upon, and possibly regulated. One question is who will do the regulating?
The obvious place to start is with politics and economics. Here in the US, we are going through a major transition. Many of us feel that the election of Joe Biden over Donald Trump can give us a chance to recover our ideals and values. We hope we can begin to chart a more stable future. Of course, not everyone feels that way. Trump himself is still muttering that he was robbed, and many of his fans believe at least some of that.
Are we at a tipping point? Can President-elect Biden make everyone feel safe again? Will there be a new era of prosperity, equality, and creativity, or will Mitch McConnell and 52 Republican Senators succeed in stopping most of what Biden wants to do, and little will change?
But before I, or anyone else makes any predictions, other major factors need to be considered. Many of those factors can be seen right here on this “Prediction” site. New ideas and new technologies are being developed every day, several a day, actually. And the technology that is being produced is being used to produce more technology that is faster and smarter. Just look at the posts that have been put up recently. They describe possible advances in superconductors, the use of disinfectant tunnels, synthetic biology, and diluting blood to reduce the effects of aging. These kinds of discoveries, inventions, and interventions, and so many others like them, are going to have as big an effect on everyone’s lives as all the political maneuverings and shenanigans. Most of the politicians don’t seem to understand what is going on, and how swiftly things are changing.
People all over the world, whether they realize it or not, are already becoming very dependent upon technology, especially the biggest technology marketing and selling companies such as Facebook, Google, Microsoft, Tesla, Amazon, and Apple, along with several Chinese companies such as Alibaba, Huawei, Tencent, and Baidu. I am sure there are others out in the rest of the world that I am missing due to American ignorance.
These companies already have the capacity to do things that a government is supposed to do, and they are more skilled at using technology to do it. I am referring to things like controlling traffic, building an international wi-fi system from satellites, planning trips to other planets, building and regulating self-driving vehicles, or running a healthcare system. They already know more about each of us than we do ourselves. They have sophisticated Artificial Intelligence programs that predict what we want, when we need it, and how much we are willing to pay for it. That can include our medicines, as well as our favorite craft beers. What will be the influence of these companies over the next decade?
Right now, the influence of these companies, and of many other advances in technology, is almost completely unregulated. Will Facebook change the algorithms of its feed to support efforts to reduce the destruction of our climate? Will TikTok dance moves become more popular than American football, and inflict a different kind of brain injury? Will a twelve-year-old "influencer" still have marketing power when she turns eighteen? Will CRISPR developers only work to prevent genetic diseases, or will they try to breed a cross between Michael Jordan and Stephen Hawking? Will AI pick out the brightest students before they are five years old and send them to special schools? Will it be the government or Google that uses location tracking and facial recognition to follow where we go and who we talk to?
As one of our former Secretary of Defense has stated, “there are known unknowns, and there are unknown unknowns.” That is certainly the case now. Any attempts to prepare ourselves for what is coming have to consider all of these things. We must also realize that there are many more that we don’t even know about. That makes all of this difficult to do.
If that is not enough, there is an additional problem. Right now, there seems to be about 25–30% of Americans who don’t believe that any of these new developments matter. Many don’t believe they are real, and there are many people in important positions telling them that all the scientific advances, and all of these new technologies are part of a secret plot to control their lives. They are being told this by people who are heavily invested in keeping things the way they are. These are people who are resisting changes in order to maintain their power and money.
Sadly, that includes a lot of people. Recently, I read that 63% of the jobs that existed in 1950 are gone. Most have been replaced by other kinds of jobs. But further replacement seems to be in doubt. This is making a lot of people anxious. It makes them fear change.
The next decade will define how everyone in the world will live for the next fifty to a hundred years. What I have observed from working with people for the last fifty years is that our lives are much more intertwined and interconnected than they were fifty years ago. We need to change the mindset that each individual has to take care of only him/herself and family. We need to learn the skills of better communication, cooperation, coordination, and negotiation. So far, judging from how we have dealt with the virus, we are failing in our attempts. Will we learn from our failure?
So, here is my prediction: by 2030 the future course of human events will be much clearer. At that time things will be much better for almost all of us, or for only a few of us. Which way will it go? I'm not ready to take a stand yet. If we go by history, things look grim. As Rana Dasgupta describes our past, sooner or later the very rich get to regain control of the levers of power. He traces that trend from the Magna Carta of 1215 and the execution of Charles I in 1649 to the present.
But maybe we can really create a new kind of society. We are better equipped to do that now than any time in history. We can use all of our new technologies to help us model and analyze what decisions will be beneficial, and which could be problematic. However, we first have to reach some agreement about what is beneficial and what is problematic. That won’t be easy.
That is why I am posting this here. Maybe, together we can all help clarify things. Maybe we can find a way, as is the mission of this publication, to make what we want to happen, happen. Crowdsourcing has proved to be very successful. Maybe we can do more than just predicting, we can find solutions and methods to get things accomplished. If the editor allows, I will continue to post my thinking about what is coming and why.
I am hoping that some of you will respond. What kind of future would you like to design? How possible is that? What do you think some of the major determinants will be? And also, get to work on that blood revitalization treatment. I turn seventy-six in March. The cancer is all gone, but my knees hurt. | https://medium.com/predict/consider-this-and-this-and-also-this-ff6b355b625b | ['D J B'] | 2020-12-06 03:24:09.504000+00:00 | ['Medicine', 'Predictions', 'Politics', 'Technology', 'Hope'] |
1,460 | 3 Ways to Implement the Singleton Pattern in TypeScript With Node.js | The Problem — Logging Example
Here’s an example problem: I have a Node.js app for payment processing that uses a Logger class. We want to keep a single logger instance in this example and ensure the Logger state is shared across the Payment app. To keep things simple, let’s say that we need to ensure that the logger needs to keep track of the total number of logged messages within the app. Ensuring that the counter is tracked globally within the app means that we will need a singleton class to achieve this.
A high-level diagram of the sample app by the author.
Let’s go through each of the classes that we will be using.
Logger class: Logger.ts
A basic logger class that allows its clients to log a message with a timestamp. It also allows the client to retrieve the total number of logged messages.
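A minimal sketch of what Logger.ts could look like (the method names log and getLogCount are assumptions based on the description above, not the author's exact code):

```typescript
// Logger.ts (sketch): logs messages with a timestamp and counts them.
export class Logger {
  private logCount = 0;

  // Log a message with a timestamp and increment the counter.
  public log(message: string): void {
    this.logCount += 1;
    console.log(`[${new Date().toISOString()}] ${message}`);
  }

  // Total number of messages logged by this instance.
  public getLogCount(): number {
    return this.logCount;
  }
}
```

Note that each `new Logger()` keeps its own logCount, which is exactly what causes the problem demonstrated later.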
Payment class: Payment.ts
The Payment processing class processes the payment. It logs the payment instantiation and payment processing:
The entry point of the app: index.ts
The entry point creates an instance of the Logger class and processes the payment. It also processes the payment through the Payment class:
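Putting the three files together as one runnable sketch (class, method, and message names here are assumptions based on the prose, not the author's exact code):

```typescript
// Logger.ts (sketch)
class Logger {
  private logCount = 0;

  public log(message: string): void {
    this.logCount += 1;
    console.log(`[${new Date().toISOString()}] ${message}`);
  }

  public getLogCount(): number {
    return this.logCount;
  }
}

// Payment.ts (sketch): note that Payment creates its OWN Logger instance.
class Payment {
  private logger = new Logger();

  constructor() {
    this.logger.log("Payment instantiated");
  }

  public process(amount: number): void {
    this.logger.log(`Processing payment of ${amount}`);
  }
}

// index.ts (sketch)
const logger = new Logger();
logger.log("App started");

const payment = new Payment();
payment.process(100);

// Three messages were printed, but this Logger instance only saw one:
console.log(`Logged messages: ${logger.getLogCount()}`); // 1
```

Because index.ts and Payment each call new Logger(), the counters live in separate instances.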
If we run the code above, we will get the following output:
# Run the app
tsc && node dist/creational/singleton/problem/index.js
Output screenshot by the author.
Notice that the log count stays at 1 despite showing 3 logged messages. The count remains at 1 because a new instance of Logger is created in index.ts and Payment.ts separately. The log count here only represents what’s logged in index.ts . However, we also want to include the number of logged messages in the Payment class.
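One classic shape for the fix (a sketch; the article covers several variants, and this particular form is my assumption) is to make the constructor private and expose a static getInstance() so every caller shares one instance:

```typescript
class Logger {
  private static instance: Logger | null = null;
  private logCount = 0;

  // A private constructor prevents `new Logger()` outside the class.
  private constructor() {}

  // Lazily create the single shared instance on first use.
  public static getInstance(): Logger {
    if (Logger.instance === null) {
      Logger.instance = new Logger();
    }
    return Logger.instance;
  }

  public log(message: string): void {
    this.logCount += 1;
    console.log(`[${new Date().toISOString()}] ${message}`);
  }

  public getLogCount(): number {
    return this.logCount;
  }
}

// Both "files" now receive the same instance:
const appLogger = Logger.getInstance();
appLogger.log("App started");

const paymentLogger = Logger.getInstance();
paymentLogger.log("Payment instantiated");
paymentLogger.log("Processing payment of 100");

console.log(appLogger.getLogCount()); // 3, because the count is now shared
```

With the constructor private, the compiler rejects stray `new Logger()` calls, so the shared counter can no longer be bypassed.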
Here are different ways to solve this problem by using a singleton design pattern. | https://medium.com/better-programming/3-ways-to-implement-the-singleton-pattern-in-typescript-with-node-js-75129f391c9b | ['Ardy Dedase'] | 2020-11-02 16:29:10.810000+00:00 | ['Technology', 'Startup', 'Software Development', 'JavaScript', 'Programming'] |
1,461 | Checking File Path and Manipulating Files with Python | Photo by Michal Ico on Unsplash
Python is a convenient language that’s often used for scripting, data science, and web development.
In this article, we’ll look at how to check for path validity and read files with Python.
Checking Path Validity
There are several ways to check whether a path exists in Python:

exists — returns True if the path exists and False otherwise

is_file — returns True if the path exists and is a file, and False otherwise

is_dir — returns True if the path exists and is a directory, and False otherwise
For instance, we can use them as follows, given that we have the following Path object:
from pathlib import Path
path = Path('./abc/')
Then, if ./abc/ doesn't exist, running path.exists() returns False .
We can check if a path is a file by calling the is_file method as follows:
from pathlib import Path
path = Path('./foo/bar/abc.txt')
is_file = path.is_file()
Given that we created the ./foo/bar/abc.txt file, is_file should be True .
Finally, we can use the is_dir method as follows:
from pathlib import Path
path = Path('./foo/bar/')
is_dir = path.is_dir()
Then given that we created the ./foo/bar/ directory, is_dir should be True .
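Putting the three checks together in a self-contained run (an illustrative sketch using tempfile so the paths are guaranteed to exist, rather than the article's ./foo/bar/ layout):

```python
import tempfile
from pathlib import Path

# Create a real directory and file to check against.
tmp = Path(tempfile.mkdtemp())
file_path = tmp / 'abc.txt'
file_path.write_text('abc')

print(tmp.exists())                # True
print(tmp.is_dir())                # True
print(file_path.is_file())         # True
print((tmp / 'missing').exists())  # False
```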
The File Reading and Writing Process
We can use the Path object to write content to a file.
For instance, we can run the following code:
from pathlib import Path
path = Path('./foo/bar/abc.txt')
path.write_text('abc')
The code above will write abc to ./foo/bar/abc.txt, given that the ./foo/bar/ directory exists ( write_text creates the file if it doesn't already exist).
We can read the content from a file into a string with the read_text method as follows:
from pathlib import Path
path = Path('./foo/bar/abc.txt')
content = path.read_text()
Then, given that ./foo/bar/abc.txt has the content abc , read_text returns it and we assign it to content , so content 's value will be 'abc' .
Opening Files With the open() Function
We can use the open function to open a file. To do this, we pass a Path object into the open function as follows:
from pathlib import Path
file = open(Path('./foo/bar/abc.txt'), 'r')
By running the code above, we get a filehandle that we can use to read and write to the file depending on what permission we pass into the second argument.
In the example above, we opened it with read-only permission since we passed in 'r' .
Reading the Contents of Files
With the filehandle, we can call read on it to read the file.
For instance, we can write:
from pathlib import Path
content = open(Path('./foo/bar/abc.txt'), 'r').read()
to get the content of the abc.txt file. Given that we put abc in the file, content should be 'abc' , since read returns the file's entire contents as a string.
We can also read a file line by line with the readlines method, which returns the file's lines as a list. (Note that readlines still loads the whole file into memory; to process a big file without doing that, iterate over the file object itself instead.)
For instance, given that our ./foo/bar/abc.txt file has the following:
abc
123
def
foo
bar
Then we can write the following code to read in the content line by line with readlines :
from pathlib import Path
file = open(Path('./foo/bar/abc.txt'), 'r')
for line in file.readlines():
    print(line)
Then the loop prints the file's contents one line at a time.
Photo by NeONBRAND on Unsplash
Writing to Files
We can use the open function to open a file and write to it.
For instance, we can use it as follows:
from pathlib import Path
with open(Path('abc.txt'), 'w+') as file:
file.write('abc')
The code above will write to the abc.txt file, creating it first if it doesn't exist.
We use the with keyword to open it and then clean up automatically when writing is done.
Then abc.txt should have abc as the file content.
The code above is equivalent to:
from pathlib import Path
file = open(Path('abc.txt'), 'w+')
file.write('abc')
file.close()
Other mode strings that we can pass as the 2nd argument include 'a' for appending to a text file and 'w' for writing (which truncates the file if it already exists).
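For instance, 'a' adds to the end of an existing file instead of overwriting it (sketched with tempfile so the example runs anywhere, without depending on existing paths):

```python
import tempfile
from pathlib import Path

# Work in a temporary directory so the example is self-contained.
path = Path(tempfile.mkdtemp()) / 'abc.txt'

with open(path, 'w+') as file:
    file.write('abc')

# Opening with 'a' appends instead of truncating.
with open(path, 'a') as file:
    file.write('def')

content = path.read_text()
print(content)  # abcdef
```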
Saving Variables with the shelve Module
We can use the shelve module to open a file and save variables and their values into it.
For instance, we can write:
import shelve
shelf_file = shelve.open('data')
shelf_file['fruits'] = ['apple', 'orange', 'grape']
shelf_file.close()
The code above will save ['apple', 'orange', 'grape'] under the 'fruits' key into a file called data (depending on the platform's dbm backend, the actual filename may gain an extension such as .db ).
It’s a binary file, so we have to read it back with the shelve module as follows to retrieve the data:
shelf_file = shelve.open('data')
print(shelf_file['fruits'])
Then given the data file we saved before, we’ll print out the value of the 'fruits' key, which is [‘apple’, ‘orange’, ‘grape’] .
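Shelves also support the with statement, which closes them automatically just like files (a small self-contained sketch, stored in a temp directory):

```python
import shelve
import tempfile
from pathlib import Path

# Store the shelf in a temp directory so nothing is left behind.
db_path = str(Path(tempfile.mkdtemp()) / 'data')

with shelve.open(db_path) as shelf_file:
    shelf_file['fruits'] = ['apple', 'orange', 'grape']

# Reopen and read the value back; close() is handled by the with block.
with shelve.open(db_path) as shelf_file:
    fruits = shelf_file['fruits']

print(fruits)  # ['apple', 'orange', 'grape']
```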
Conclusion
We can check for file path validity with the methods in the Path object.
To read a text file, we can use either the read_text method of the Path object or the open function.

The readlines method reads a file's lines into a list.

write_text is used to write text to a file. Also, we can use the write method of a filehandle to write to a file.
1,462 | My steps to find a great job fit | Nowadays I have been seeing an increased variety of jobs in Tech. With so many options and all those amazing job descriptions sometimes I get afraid of applying to a new role to find out, later, that it wasn’t what I was looking for.
When I mentioned this to my therapist, we started working on my insecurities and all the disappointment I had already faced in my 5+ years of experience. Because of that, I was able to come up with some key strengths that I value most in a company and what strategies I could follow to make sure that it would be a good fit for me.
As I've realized, I value informal workplaces where people enjoy their working days, and companies whose products are of good quality. With that being said, I came up with a few topics to check during the interview process and how to find the answers.
Positive feedback from employees
The first thing that I look at about a company is the work environment. Is it formal, or informal? What are the complaints about the managers and directors? Do people have fun working there?
I like to say that the tech world is small. Working as a Software Engineer in Portugal allowed me to meet many people from different places in the world. Some of them are Brazilians who worked with me in the past, people I talked to just once at a conference, or even people who interviewed me for positions that turned out not to be a good fit. There is always someone who knows someone else who works for that company. Go on, hit send on that message to request feedback about the environment.
A good place to get a broader picture of the environment is Glassdoor.com. I look at all the cons from recent comments (last 3 months) for a pattern. Usually, companies with bad environments are marked by several comments about how bad the management layer is, how the company values HIPPO (the highest paid person's opinion), or the long hours that employees have to put in.
With feedback from both current and past employees and all the reviews from Glassdoor, you probably already have enough information to decide on whether to keep evaluating the company or not.
Strong engineering team and culture
A strong engineering culture is almost impossible to achieve when a company has a high employee turnover rate. You can search on LinkedIn for people who work at a specific company, looking for someone who has worked there for more than a year and a half. Use this search also to identify the work experience of the team. Talk with a friend about the company.
A different source of information is the company career page. Usually, they have videos and information about core values, benefits, and the environment. Sometimes you can even find a blog with more information about solutions for tech problems they had or information about the company’s growth and strategies.
One question that I started to ask in every interview is about support for engineers to attend conferences. If there is no budget to help the team attend good conferences, I don't believe the engineering team will be using the best practices and tools. Attending conferences also helps disseminate the culture of sharing, which I consider to be very important.
A product that customers love
If you've ever used a terrible product or service, you probably ended up cursing the company and saying terrible things about it. Do you want to be the engineer behind it? I know I don't.
The best way to know if customers love the product is to use it yourself. Download the app, create an account on the service, and explore it for a while. This approach probably won’t work for a B2B company, so try to look on its website for current customers to see if you know the brands that work with them.
A data-driven company
Being data-driven, to me, means that the work I’ll do has a high probability of being meaningful. I want to understand the problems I’m trying to fix and how it changes the customer’s life or work. To do so, a company must have a strong Data team and the decisions must be based on key metrics.
During interviews, ask about how the engineering team creates its backlog. Where does the work come from? If the conversation never mentions the key metrics that guide the company, this is a red flag. Probably the backlog comes from top-down requests or the sales team. There are definitely exceptions to this; just keep in mind that if being data-driven does not show up in mid-term goals, it may never be prioritized at all.
Freedom to make technological decisions
What is the role of a Software Engineer in that company? Do engineers work as executors (they just type code), or do they act like owners? The second is usually associated with better tools and the freedom to choose your battles.
This is directly related to the way I like to work. I'm not a lone wolf, but I need some time on my own to analyze metrics and document flows, and only then work with the team to propose a solution.
I want to be able to develop scalable and maintainable code whenever possible. To do so, an engineer must have the freedom to make decisions. Should we build a new service? Is there any part of the system that needs refactoring? Having a non-technical person telling the engineering team how to do their jobs will not give you time to reason about technical debts or quality.
In an interview, ask about how they ensure code quality. I expect to hear about a CI that runs at least tests and linting. Low test coverage is also a red flag; it tells me that the company cares more about delivering everything on time than delivering parts of it with good quality. If it is an old codebase, the team may not be worried about refactoring existing code.
Opportunity to make an impact on the business
By having data to support your ideas and the freedom to put them into practice you will realize, in the end, that you have the right conditions to cause some impact in the business. You will have a say in business decisions.
I usually find it very difficult to have an impact on a large organization, especially if your team takes care of such a small piece of the product. But working in a smaller scope gives you different opportunities. You’ll start working on scalability problems and performance.
This can be easily identified in an interview by asking about the team structure and their scope. You can also ask about the current challenges that this team is dealing with right now and what are the plans for the next 12 months. | https://medium.com/@leandro.gomes/my-steps-to-find-a-great-job-fit-a4e356a00e88 | ['Leandro Gomes Da Silva'] | 2020-12-22 21:29:22.893000+00:00 | ['Job Hunting', 'Careers', 'Technology', 'Software Development'] |
1,463 | 10 Invaluable Lessons on Digital Transformation in 2019 | Real talk: every company is currently tackling digital transformation in their own way. Love it or hate it, it’s happening. In fact, chances are that the company you work for, or want to work for, has plans for it. This is an exciting time to see these technologies play out, and last month, I got to see it first-hand at PLDT’s Philippine Digital Convention 2019.
The energy at the event was infectious and I found myself marveling at the pioneering digital solutions and innovative business concepts shared by industry leaders and digital experts. Envisioned with the theme “Come to the Edge”, the convention was all about the impact of technology on businesses, the state of communication, and ideas that unlock the doors to a successful digital transformation.
I personally found the idea to be very thought-provoking, so I asked myself, "What does it mean to come to the edge?". Based on the speakers and delegate discussions, this is what I gleaned:
Tech has accelerated the pace of change and brought us to the edge. What does this entail? High performance, low latency, edging out competitors, bolder decisions, focusing on proximity, pushing the limits, living on the edge.
Now, coming down to the meat and potatoes of it all — here are the 10 invaluable lessons I picked up at #PHDigicon2019:
“We don’t have a choice on whether we digitally transform; the choice is how well we do it.” — Erik Qualman, Motivational Keynote Speaker and #1 Bestselling Author
Picture taken by Charu Misra of Erik Qualman at #PHDigicon2019
2. “With 2019 forecasted to see huge milestones in the tech and business landscapes, many analysts predict that this year will be the beginning of the 4th Industrial Revolution.” (due to advancements in AI, IoT, machine learning etc.) — Jovy Hernandez, SVP & Head, PLDT & Smart Enterprise Business Groups
3. Tech is not designed to replace face-to-face interaction. It’s to help when time and distance are an issue.
4. Job Displacement: Cisco conducted a study based on 275 million full-time equivalent workers employed in the six largest ASEAN economies and found that 28 million of those workers are going to be displaced. Innovations in digital technology will move workers from what they are currently doing to what is required in the future.
Picture taken by Charu Misra of Naveen Menon at #PHDigicon2019
5. Word of Mouth is now World of Mouth. The businesses of yesterday had to rely on customers spreading their message via verbal waterfall. Increased communication and the internet have allowed that process to occur on a global scale, at lightning speed. What was once word of mouth is now world of mouth. We actually have that speed!
6. Gone are the days of photocopies and binders stuffed in chunky drawers and cabinets. Currently, we're in a situation where we're seeing everything move to the cloud. In order for enterprises to keep up, you absolutely need software-defined WAN (SD-WAN).
7. Did you know that it would take Niagara Falls 210,000 years to use up one quintillion gallons of water? If that surprises you, get this: there are 2.5 quintillion bytes of data being generated every single day(!!) So what we need is an edge in analytics to compete and succeed in the digital age.
8. Trans-humanism: the possibility of fundamentally improving the human condition beyond its current physical and mental limitations, especially by means of science and technology. | https://medium.com/the-looking-glass/10-invaluable-lessons-on-digital-transformation-in-2019-cf5f31f08e0a | ['Charu Misra'] | 2019-08-13 04:12:32.045000+00:00 | ['Artificial Intelligence', 'Technology', 'Analytics', 'Digital Transformation', 'Digital Marketing'] |
1,464 | 10 Year Plan | Almost 30 years later I’m still pursuing that initial entrepreneurial inspiration.
You may have heard about Misty Robotics.
Maybe you’ve heard our vision: To put a personal robot in “every” home and office.
I want to be absolutely clear about our intention with this statement. We define “every” as at least >80% of all homes and offices — in other words, on par with penetration of cars, televisions, personal computers, and smartphones.
This sort of adoption will take time. Likely decades. A phenomenal result would be to penetrate more than 10% of homes and businesses within 10 years.
The point is, it’s a massive undertaking.
However, we are hardly alone in this mission. Many of “the Big 5,” or well-funded startups, or aspiring entrepreneurs have a similar vision.
Some sibling strategies we see at work in the market today are:
1. The "iRobot" strategy: make single-task robots that do tasks amazingly well for consumers and add multi-purpose capabilities over time.
2. The "Amazon" strategy: make robots with ears and mouths and, over time, add eyes, then mobility, then hands, then legs to become fully multi-purpose.
3. The "Softbank" strategy: make very expensive full-featured multi-purpose robots and over time make them more affordable.
4. The "Sphero" strategy: make toy robots that are not sophisticated and, as technology matures, add sophistication.
5. The "Mayfield Robotics" (aka Kuri) strategy: make robots with a few "killer use cases" that consumers will want today and then open them up to become multi-purpose over time.
And then there’s the Misty Robotics strategy: make the very best multi-purpose robots available to programmers and makers (a.k.a. dreamers, inventors, creators) and enable them over time to add tens of thousands of skills to make them eventually useful to office and home consumers. Similar in some ways to Amazon’s Alexa strategy (open APIs for programmers to make skills) and different (we include makers and robots are eventually useful to millions).
The Misty Robotics strategy is essentially the personal computer strategy. And the Internet strategy. And the Web strategy. And the AR/VR strategy. Each of these technologies, before they became mainstream (or will become in AR’s case), were expensive, cumbersome, arcane, and for “experts only”. Then, Apple and 3Com and Mosaic and Oculus came along with:
Well-integrated, mass-manufactured offerings
That did most of what those inventors wanted them to do
of what those inventors wanted them to do In very easy-to-program ways
For an affordable price
Our 10-year plan is a simple one.
Phase One: Unleash the inventors (year one to three)
We will make products that finally make the promise of robots accessible to every programmer and maker — having learned lessons from the tens of thousands who use Sphero’s SPRK. No longer does one need to be an electro-mechanical wizard who spends 6–12 months assembling “their unique robot” and then a few more months programming it to, maybe, move from point A to point B reliably. No longer does one need to beg to have some “compute time” on the industrial robot or the retail robot at the office. No longer does one have to save up the price of a new car to get their own. No longer does one have to stare at the ugliest pile of parts on the planet (that usually doesn’t have “eyes”, or “ears”, or a “mouth”).
“If you could program a robot to do something in 30 minutes; would you?” We think most will say a resounding yes!
Instead, we're humanity's number one fan: we believe in the inventiveness of the dreamers. All one has to know is how to program a web site (JavaScript, Python), a mobile app (Java, Objective C), or a PC app (Java, C#, etc…) to be able to program an almost fully-formed robot.
It’ll be as easy as:
function RobotGo () {
  GetMap ();
  Navigate (pointX, pointY);
  DoAnything ();
}
Phase Two: Enable Early Adopters (year three to year six)
Now that tens of thousands of enterprise programmers, entrepreneurial programmers, makers, job seekers, and fun seekers have built and published useful, fun, practical, or ingenious uses for a personal robot, early adopters will see that there’s a robot for them that is appealing.
Early adopters aren’t interested in having all ten thousand skills at their fingertips — they’re interested in having a few dozen that make sense. They want to save money. They want to save time. They want to avoid the mundane so that they can have more time for the joyful. They want to be safe. They want their loved ones to be safe. We’ve done the hard work so that they can achieve these benefits.
And, now, hundreds of thousands of inventors are creating skills for Misty Robotics’ robots. At scale this looks like the computer industry’s independent software vendor (ISV) with hundreds of thousands of software programs or the smartphone market with millions of apps.
By this time, our robots will be more advanced in their learning capabilities, in their physical attributes and in their diversity. All of this will be driven by the multitude of skills the inventors create and share with other Misty Robotics’ users in our marketplace.
Phase Three: Cross the Chasm (year six to year ten)
Early adopters and the company have now shown the “early majority” that robots aren’t the risk they fear today — that they can fulfill a wide array of human needs — in the office and in the home — easily. Crossing the chasm now means handling all of the important practicalities that come with owning a robot — where can I easily see them to evaluate them (mass retail)? Where can I get them serviced if there’s an issue (the local electronics repair shop)? Where can I see the company’s track record with respect to protecting privacy and security (on the web site, in the robot itself)? Is the company legit? Who are the competing personal robots and are they any good?
And, because more inventors continue to pour into the marketplace, the Early Majority understands this is no passing fad. That, instead of a few dozen useful skills for this one robot at this one price, there are now hundreds of skills for their specific situation. Now we’re talking!
Our robots are starting to learn from each other so they’re even smarter and more skilled. They’re made for all types of environments and uses (indoor, outdoor, hot, or cold).
And, still, inventors continue to invent. Misty Robotics’ robots are finally becoming like colleagues, friends and family members — fun, useful, interactive, evolving.
Onward
Why did we choose to share this when so many of our competitors will now understand our plan? Because our plan is much more about you than them. This team has shipped over three million robots while working at Sphero; we know how to “do robots.” We’re obsessed about delivering on this promise and we’re convinced that if we stay focused on delivering a robot that every programmer and maker in the world can do fun and useful things with, then we’ll be on our way.
Sure, other companies might try to copy this. And, sure, maybe there are already some out there with the same strategy and approach. We’ll take our chances on putting our team on the field and compete with the best of them. | https://medium.com/mistyrobotics/10-year-plan-24d278479ceb | ['Misty Robotics'] | 2020-03-09 18:16:02.283000+00:00 | ['Robotics', 'Future Technology', 'Robots', 'Strategy', 'Tech'] |
1,465 | From 9–6 to 24–7–365. | From 9–6 to 24–7–365.
These are the numbers that represent our work-life mentality and technology’s capability.
Photo by Fauzan Ardhi on Unsplash
Technological platforms do not need a break. We do.
We need to strike a balance. We have to start thinking about ways to leverage technology, so our work continues to progress without our actual presence.
In short. Automate. Then take a break. | https://medium.com/technology-hits/from-9-6-to-24-7-365-e938ee60ec71 | ['Aldric Chen'] | 2020-12-16 06:17:16.227000+00:00 | ['Life Lessons', 'Work Life Balance', 'Technology', 'Productivity', 'Automation'] |
1,466 | The Rise of B2B eCommerce and Digital Logistics in Indonesia | Singapore, July 2020
Indonesia’s B2B eCommerce and digital logistics sector has become Southeast Asia venture capital’s new hunting ground. There were around 16 deals announced just in 1H 2020 totalling at least US$145m (1).
Why are investors ploughing in capital now into an historically overlooked sector? We briefly explore some of the driving factors.
Starting from a low penetration rate, B2B eCommerce is at an inflection point and looks set for exponential growth. Similar to B2C, which went through an accelerated adoption phase with several unicorns emerging (e.g. Tokopedia, Bukalapak), B2B is now set to catch up with a massive market to capture.
B2B in Indonesia accounts for less than 50% of all eCommerce activity whilst in similar markets it accounts for over 70% (2).
Frost & Sullivan has forecasted a 59% CAGR between 2017–2022 for Indonesian B2B eCommerce sales, around two times the growth rate for B2C eCommerce over the same period.
Indonesian MSMEs (Micro, Small and Medium Enterprises), which account for 57% of GDP, are ready to come online in droves powering the shift from offline to online for the B2B sector.
A CLSA Survey of MSMEs in the retail industry found that 90% of “Mom and Pop” retailers would like to source goods online, however, only 18% do so currently. Start-ups such as Bizzy Digital, an offline-to-online technology platform, have built products centered around offline distributors and offline retailers which allow them to make a gradual step-by-step shift into the online universe.
While the rise of online in B2B retail has for some time seemed an inevitability in Indonesia, its timeline has been accelerated by COVID-19. The epidemic has forced changes in buying behaviour, with physical channels feeling the effects of new social distancing norms and lockdowns.
When operating at scale, B2B eCommerce can drive sustainable growth and incremental profitability. Compare this with B2C eCommerce, the rise of which was typically fuelled by heavy discounting and subsidies offered to an often fickle customer base.
The time to scale in B2B is potentially longer, but once the customers are “defaulted” into a B2B platform, they are committed for the long-term resulting in very high customer lifetime values.
In addition, monetization avenues for B2B platforms are much more diverse than for B2C and do not majorly rely on a simple commission model on sale of products. B2B platforms can make money from a range of ancillary offers including financing, insurance, data and insights, merchandising, promotion and ads, and digital products. There is evidence in similar markets that the revenue contribution from value-added services is often higher than sales commission fees, potentially leading to a shorter path to profitability when compared with B2C eCommerce.
The embedded finance opportunity in B2B eCommerce and digital logistics is particularly significant. Besides the large digital payments opportunity, there is a US$166b estimated credit gap for Indonesia’s MSME segment. Traditional banks, with an exception of Bank BRI, have largely stayed away from the MSME segment, which has traditionally been characterized by a lack of conventional collateral and no banking history.
Novel fintech platforms (e.g. Investree, Modalku) are able to cut through the red tape by leveraging data and technology to offer right-sized products for MSMEs including inventory financing, digital merchant cash advances, invoice factoring, and buyer financing, amongst others.
Such fintech platforms have established synergistic partnerships with B2B players, with the fintechs providing the know-how, financing, and digital payment capabilities and the B2B players opening up their large MSME customer base and sales data, resulting in higher customer acquisition and retention rates for both types of platforms.
A large part of the US$240b Indonesian logistics sector is ripe for efficiency gains from digital disruption as B2B eCommerce grows. Last-mile solutions have improved significantly alongside the B2C eCommerce revolution. However, as B2B commerce grows so does the need for efficient first-mile and mid-mile solutions.
This space is large and highly fragmented with multiple intermediaries leading to over-investment, underutilization, unreliable service levels, and high costs. A number of quality start-ups are arming themselves with substantial capital as they finally look to solve these long-term structural issues.
For example, Kargo Technologies with its “uber for trucks” model raised a US$31m Series A in April 2020, and Waresix operating an integrated warehousing and trucking fulfilment platform is reportedly close to finalising a US$50m round (3).
We can confidently say the B2B eCommerce and digital logistics in Indonesia are finally coming of age supported by strong long-term fundamentals with an accelerated timeline due to Covid-19.
We expect the amount raised by early-stage operators in this space 2H 2020 to exceed the amount raised in 1H 2020, with several high-quality start-ups in the market raising funds.
Watch this space.
Shauraya Bhutani — North Ridge Partners
Singapore, July 2020
Note: NRP has existing business relationships with companies mentioned in this post.
Sources: Macquarie, VentureCap Insights, Crunchbase, CLSA, Frost & Sullivan, DealStreetAsia
(1) US$145m is the cumulative amount from 11 announced deals as five out of the 16 announced deals did not disclose the amount of the raise.
(2) B2B eCommerce sales in India, China, and Thailand accounted for 93%, 72% and 73% respectively of total eCommerce sales.
(3) DealStreetAsia reported on 30 June 2020 that Waresix is close to raising a US$50m round (Link) | https://medium.com/@shauraya-bhutani/the-rise-of-b2b-ecommerce-and-digital-logistics-in-indonesia-65295b803d6f | ['Shauraya Bhutani'] | 2021-03-24 10:26:39.638000+00:00 | ['Technology', 'Indonesia', 'Ecommerce', 'B2B', 'Southeast Asia'] |
1,467 | Spark on Kubernetes: Integration Insights from Salesforce | Kubernetes is arguably the most sought after technical skill of the year and has seen a 173 percent growth in job searches from 2018 to 2019. As one of the most successful open source projects, it boasts contributions from thousands of organizations that ultimately help businesses scale their cloud computing systems.
Many large enterprises have adopted Kubernetes, including Salesforce. Salesforce implemented this open-source platform to integrate with analytics engine Apache Spark. Salesforce has been working with Spark running on Kubernetes for about three and a half years.
As Salesforce Systems Engineering Architect Matt Gillham explains in his Spark on Kubernetes webinar, he and his team began working with the analytics engine back in 2015.
“We wanted the ability to provide Spark clusters on demand, particularly for streaming use cases. We were looking at other uses for the engine as well.”
The first use case the team took on was processing telemetrics and application logs, streams that ran through a set of Apache Kafka clusters.
“We wanted to create a separate Spark cluster for each of our internal users to maximize the isolation between them. So, we had some choices to make about what cluster manager to use.”
The team considered several, with Yarn, Kubernetes, and Mesos emerging as the top candidates. Spark had first-class support for Yarn and Mesos so there were tradeoffs to evaluate.
“At the end of the day, we chose Kubernetes largely due to the backing and momentum it had and who was behind the project. And it seemed to be the most forward-looking cluster manager for cloud-native applications.”
While the team focused on the success of its initial internal customers, it also prototyped some other big data systems running atop Kubernetes. Of course, simultaneously, the open source community was continually refining Kubernetes. One community project involved making Spark natively compatible with the Kubernetes scheduler, to eliminate redundancy between the function of the Spark master components and what the Kubernetes scheduler and control plane could manage.
Matt and his team learned a lot from these first efforts, and that knowledge has guided and shaped their subsequent work with the Spark and Kubernetes platforms. In the Spark on Kubernetes webinar, Matt digs into some of that hard-earned knowledge. He explains in detail why:
Distributed data processing systems are harder to schedule (in Kubernetes terminology) than stateless microservices. For example, some systems require unique and stable identifiers like ZooKeeper and Kafka broker peers.
Kubernetes is not a silver bullet for failure tolerance. (Hint: It’s really important to understand the lifecycle of a Kubernetes pod. We wrote more about building a fault-tolerant data pipeline here.)
It’s in everyone’s best interest for Salesforce to align its Kubernetes efforts with those of the broader Kubernetes community. As relatively early adopters of Kubernetes, Salesforce’s Kubernetes problem-solving efforts sometimes overlapped with solutions that were being introduced by the Kubernetes community.
Matt and his team are continually pushing the Spark-Kubernetes envelope. Right now, one of their challenges involves maximizing the performance of very high throughput batch jobs running through Spark.
“We’re constantly discovering new issues running Spark at scale, solving them, and contributing those solutions to the Spark and Spark Operator projects.”
To learn more about our work with Spark on Kubernetes, watch the webinar.
And if you’re interested in working with a company that likes to push the envelope with all its software technologies, please check out our open roles in software engineering. Join us and engineer much more than software. | https://engineering.salesforce.com/spark-on-kubernetes-integration-insights-from-salesforce-335285bd8e82 | ['Shayna Goldfarb'] | 2019-09-03 15:36:01.291000+00:00 | ['Technology', 'Data Processing', 'Apache Spark', 'Open Source', 'Kubernetes'] |
1,468 | It’s a real thing… | Marvel Fans Unite With Trekkies Against Space Force Guardians | News Break
Last year, the US Space Force logo was touted as a rip off of the emblem for Starfleet Command. Screenshot by author… | https://medium.com/technology-hits/guardians-of-the-galaxy-meet-space-force-guardians-b982b5a95881 | ['Tree Langdon'] | 2020-12-21 20:50:48.051000+00:00 | ['Self Improvement', 'Technology', 'Space', 'Entertainment', 'Science'] |
1,469 | How should you kick start your career in Machine Learning? | What is machine learning?
Machine learning is an application of artificial intelligence (AI) that gives systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
The value of machine learning becomes clear when we see how effectively its techniques can be applied to problems that appear remarkably complicated — face recognition, for instance. ML algorithms can tackle many apparently complex problems as long as there is adequate data.
Let’s go deeper into how machine learning works
Machine learning (ML) is broadly categorized into two divisions — supervised and unsupervised.
Supervised algorithms require a data scientist or data analyst with machine learning expertise who can supply accurately labelled data. Data scientists and data analysts are proficient at evaluating the data to develop predictions.

Unsupervised algorithms — neural networks among them — ingest millions of training examples and automatically recognize similarities across numerous variables.
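To make the supervised/unsupervised contrast concrete, here is a minimal sketch in plain Python. The toy data, function names, and the choice of 1-nearest-neighbour and 2-means are illustrative assumptions only, not part of any particular library:

```python
# Supervised: learn from labelled (feature, label) examples and predict
# a label for a new point — here with 1-nearest-neighbour.
def predict_1nn(labelled, x):
    return min(labelled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: no labels at all — find structure in the data itself,
# here with a tiny 1-D 2-means clustering.
def two_means(points, iters=10):
    a, b = min(points), max(points)  # initial centroid guesses
    for _ in range(iters):
        near_a = [p for p in points if abs(p - a) <= abs(p - b)]
        near_b = [p for p in points if abs(p - a) > abs(p - b)]
        a, b = sum(near_a) / len(near_a), sum(near_b) / len(near_b)
    return a, b

labelled = [(1.0, "small"), (1.2, "small"), (8.9, "large"), (9.3, "large")]
print(predict_1nn(labelled, 1.1))                 # prints: small
print(two_means([1.0, 1.2, 0.9, 8.9, 9.3, 9.1]))  # centres near 1.0 and 9.1
```

The supervised function needs the labels up front; the unsupervised one discovers the two clusters on its own — the same division described above.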
Here are a few steps to learn Machine Learning:
1. Programming skills — Several languages offer machine learning capabilities, and development is proceeding at a rapid pace across them. Currently “R” and “Python” are the most widely used, and there is ample support and community available for both.
2. Learn fundamental descriptive and inferential statistics — It is good to have an understanding of descriptive and inferential statistics before you begin serious machine learning development.
Descriptive statistics summarize and describe the data at hand.
Inferential statistics use data from a sample to draw inferences about the larger population from which the sample was taken. Because the intent of inferential statistics is to draw conclusions from a sample and generalize them to a population, we need confidence that our sample accurately represents the population.
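As a small illustration using only the Python standard library (the sample values are invented), descriptive statistics summarize the sample in hand, while the inferential step — a rough 95% confidence interval under a normal approximation — generalizes to the population the sample was drawn from:

```python
import math
import statistics

sample = [4.1, 3.9, 4.4, 4.0, 4.2, 3.8, 4.3, 4.1]  # hypothetical measurements

# Descriptive: describe the data we actually have.
mean = statistics.mean(sample)
sd = statistics.stdev(sample)           # sample standard deviation
print(f"mean={mean:.2f}, sd={sd:.2f}")  # mean=4.10, sd=0.20

# Inferential: reason about the population the sample came from —
# a rough 95% confidence interval for the population mean (z = 1.96).
half_width = 1.96 * sd / math.sqrt(len(sample))
print(f"95% CI: ({mean - half_width:.2f}, {mean + half_width:.2f})")
```

The first two lines of output say something about these eight numbers; the interval says something about all the measurements we could have taken — which is exactly the descriptive/inferential split.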
3. Data Exploration / Cleaning / Preparation — What distinguishes a good machine learning practitioner from an average one is the quality of the feature engineering and data cleaning applied to the raw data. The more quality time you invest here, the better. This stage also consumes a large share of your time, so it helps to build a structured process around it.
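A tiny sketch of what that cleaning and preparation work looks like in practice, in plain Python — the records, field names and rules (trim and normalize text, impute a missing value with the mean, drop duplicates) are hypothetical:

```python
# Raw records with typical problems: stray whitespace, inconsistent
# casing, a missing value, and an exact duplicate.
raw = [
    {"city": " Boston ", "temp": "21.5"},
    {"city": "boston",   "temp": None},
    {"city": "Austin",   "temp": "30.1"},
    {"city": " Boston ", "temp": "21.5"},
]

def clean(records):
    temps = [float(r["temp"]) for r in records if r["temp"] is not None]
    fallback = sum(temps) / len(temps)    # impute missing temps with the mean
    seen, out = set(), []
    for r in records:
        city = r["city"].strip().title()  # normalize whitespace and casing
        temp = float(r["temp"]) if r["temp"] is not None else fallback
        if (city, temp) not in seen:      # drop duplicate rows
            seen.add((city, temp))
            out.append({"city": city, "temp": temp})
    return out

print(clean(raw))  # 3 cleaned records; the missing temp is imputed
```

Real projects add many more rules, but the shape of the work — inspect, normalize, impute, deduplicate — stays the same.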
4. Introduction to Machine Learning — There are many resources for getting started with machine learning techniques. I would recommend one of the following two routes, depending on how you learn best:
The first choice is learning through books. Several excellent editions are available to begin with. A few recommended titles form an important collection of introductory texts, incorporating statistical learning and the theoretical underpinnings of machine learning.
The second is online courses; many are available nowadays, and they are a reliable way to kick-start your machine learning journey. Both students and professionals will hold an advantage over other applicants if they leverage such a degree or certification.
5. Advanced Machine Learning — This step will be largely covered if you choose a certification program, but if you are learning from books then these are some further topics you will have to study thoroughly. These topics include:
Deep learning, a subset of machine learning, utilizes a hierarchical arrangement of artificial neural networks to carry out the process of machine learning. These artificial neural networks are built like the human brain, with neuron nodes connected like a web. While conventional programs analyze data in a linear process, the hierarchical nature of deep learning systems allows machines to process data in a nonlinear way. A classical approach to identifying fraud or money laundering may rely on the amount of a transaction, while a deep learning nonlinear technique would combine time, geographic position, IP address, type of retailer and other features that are likely to indicate fraudulent activity.
Ensemble modelling is a robust method to increase the performance of your model. It normally pays off to apply ensemble learning over and above the various individual models you might be developing. Mastering this is where an expert stands out from an average professional.
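For instance, a majority-vote ensemble — the simplest form of ensembling — can be sketched in a few lines; the three rule-based “models” below are hypothetical stand-ins for real trained classifiers:

```python
from collections import Counter

# Three deliberately simple "models" (stand-ins for trained classifiers).
def model_a(text): return "spam" if "free" in text else "ham"
def model_b(text): return "spam" if "$$$" in text else "ham"
def model_c(text): return "spam" if len(text) > 40 else "ham"

def ensemble_predict(models, text):
    votes = [m(text) for m in models]           # each model votes
    return Counter(votes).most_common(1)[0][0]  # majority wins

models = [model_a, model_b, model_c]
print(ensemble_predict(models, "free $$$ offer"))    # prints: spam
print(ensemble_predict(models, "see you at lunch"))  # prints: ham
```

The ensemble can be right even when one member is wrong, which is exactly why combining models tends to beat any single one of them.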
Machine Learning with Big Data — the volume of data is rising at an exponential pace, but raw data is not beneficial until you start acquiring insights from it. Machine learning is nothing but learning from data: producing insights or recognizing patterns in the available data set. There are various applications of machine learning here.
6. Gain Experience. Work On Real Projects: Once you’ve acquired a solid grasp of the technical aspects of machine learning, it’s time to get out into the field. Expose yourself to the industry and try to find genuine data science projects on the Internet involving problems like “fraud detection”, “spam detection”, “recommendation systems”, “web document classification”, and many more.
The field of machine learning is evolving rapidly nowadays, with intelligent algorithms being implemented everywhere from apps to email to marketing campaigns. What this implies is that machine learning, or artificial intelligence, is a modern, in-demand career option you can choose.
However, machine learning being a relatively new field, you may have several doubts and confusions about how to actually position yourself for it as a profession. Let’s consider some things you need to master to kick-start your career in machine learning.
1. Understand the field first: It is an obvious but significant point. Understanding the theory of machine learning and the fundamental math behind it, together with some adjacent technology and hands-on experience, is the key to diving into this field.
2. Convert problems into mathematics: Possessing a perceptive mind is crucial in machine learning. You need to be prepared to blend technology, analysis, and math together in this field. Your grounding in technology must be strong, and you must stay curious and open toward business problems. The ability to translate a business problem into a mathematical one will, by itself, take you deep into the field.
3. Gain knowledge of the industry first: Machine learning, like every other industry, has its own particular requirements and goals. Hence, the more you examine and learn about your target industry, the more you’ll achieve here. You have to study the fundamental, everyday workings of the industry along with all the technicalities involved in it.
4. Background in Data Analysis: Experience in data analysis is excellent for transitioning or growing into machine learning as a profession. An analytical attitude is essential to succeed in this field, which means having the capacity to reason about causes and consequences, a willingness to seek out data and dig into it, and an understanding of how things work and what their outcomes mean.
The above-given steps are some ways to start a career in Machine Learning.
After graduation, students can opt to pursue careers in artificial intelligence or machine learning, for example as, | https://medium.com/quick-code/how-should-you-kick-start-your-career-in-machine-learning-a89f3ad348b | [] | 2019-05-27 17:51:02.828000+00:00 | ['Robotics', 'Programming Languages', 'Artificial Intelligence', 'Machine Learning', 'Technology'] |
1,470 | Atari | It would not be an exaggeration to say that encountering the Atari ST began my career as a technology writer.
Though I’d played with Sinclair Spectrums and Amstrad PCs in the 80s, I had never used a computer as a production tool. My undergraduate dissertation was written in longhand and I paid a lady with a royal family anniversary commemoration plate to type it up, for example.
I came home in 1989 with my 2.1 in Communication Studies and no job. My younger brother Jason had an ST up in his attic bedroom. He played games on it — a lot. I wasn’t interested in the games though. I was interested in the demos — these strange proof-of-concept showcases of imagery and music put together by enthusiasts. It was something beyond my capability — I’ve never been much of a programmer. But I was fascinated by the combination of skills required; technological savvy and creativity.
I got interested in a form of music making that went hand in hand with the demo scene called tracking. Music trackers were tools you could use to create music with short samples, using non-standard notation. Their lineage stretched back to the punch-card player pianos of the 1800s…
Illusion Software’s Quartet was not quite a tracker, but it shared some of their characteristics. Made by two University of York undergraduates, it was a four-channel, stereo sample sequencer that was unique in that it used standard musical notation. Rob Povey designed the amazing, intuitive interface. To this day, I’ve yet to use a piece of music composition software with a better notation system.
I recently got in touch with Rob’s partner in creating Quartet, Kevin Cowtan, who found ways around the ST’s rudimentary sound system as co-programmer. I gushed about Quartet — a programme that changed the way I thought about computers and that sucked up my life for almost two years. He said he’d almost forgotten it.
Getting the sounds they needed out of the ST was quite an engineering feat.
“We wrote our own version of the synth code setting the beep frequency to maximum (well above audible range) and using the volume settings to create an analogue output. We created several very highly optimised implementations in assembler — to do this we basically memorized the number of clock cycles required by every 68000 instruction and carefully picked the instructions to optimize performance.”
I recorded hours of music using Quartet. Samples carefully selected, looped and tuned as instruments — then assembled into “voicesets”. The last of it was made in the mid-90s on an Atari STe. Some of it is still listenable. It ignited an interest in digital creativity that led to the career I have today.
I recently wrote about the ST for Stuff magazine — here’s a link to it. | https://medium.com/spodgod/atari-1bfb6a2ceeeb | ['Karl Hodge'] | 2017-03-23 10:55:09.182000+00:00 | ['Technology'] |
1,471 | Who is TheLedger and what do we do besides writing blogs and doing hackathons | and writing blogs about doing hackathons.
😲 Who are we
We’re a blockchain consultancy start-up from Belgium (& The Netherlands). We’re currently a team of 9, with 8 people located in Belgium and 1 in The Netherlands.
We are part of a larger group called Cronos Groep. It is a network/ecosystem of businesses. It’s a framework which helps entrepreneurs to build out their business. They provide services like fleet and HR so start-ups like us only have to focus on their core business. Cronos Groep has holdings in more than 370 companies in various sectors and is actively involved in the start-up of some 20 companies per year.
Within this group, we belong to a smaller group called IBIZZ. IBIZZ stands for IBM, Open source and Innovation. This is where our love for Open source innovative technology comes from. We’re agnostic, but this is why we also have some IBM in our veins.
💪 What we do
We’re consultants at our core. We provide our blockchain expertise to other companies. This expertise ranges from Hyperledger Fabric, Ethereum, BigchainDB and Stellar to Hyperledger Sawtooth, IOTA,… and from analysis to development. There are so many cool distributed ledger technologies out there; this is why we try to spread our knowledge and try to be technology agnostic. Besides regular consultancy, we also felt the need to provide help creating and auditing token sales.
We also give blockchain awareness sessions and workshops to help the companies we work for to further grasp the potential of blockchain in their business.
🎒 What we’ve done
You can always find an updated list on our website https://theledger.be/projects. But we’ll include some links here as well.
Hackathons
Hack for Diamonds 2018 — Winners blockchain challenge
Blockchaingers hackathon 2018 — Winning the “Digital nations infrastructure” track
Some of our projects
Please get in touch if you’re looking for a technology partner yourself.
Greencards — B-Hive
By placing insurance (green) cards on the blockchain we are able to digitally give access in a controlled manner while speeding up the manual process of requesting access.
Competencies on the blockchain — GO & VDAB
Providing a Bring-Your-Own-Standard platform to map competencies in different standards from different companies. This to ultimately help people without the correct diploma’s to get their job using achieved competencies.
KYC on the blockchain — B-Hive
Providing a universal platform to validate KYC identities.
Smart contract audit — Inwage
Ethereum token sale contract audit. | https://medium.com/wearetheledger/who-is-theledger-and-what-do-we-do-besides-writing-blogs-and-doing-hackathons-2810f420d01a | ['Jonas Snellinckx'] | 2018-06-29 08:58:06.689000+00:00 | ['Blockchain Startup', 'Blockchain', 'Blockchain Technology', 'Hyperledger', 'Ethereum'] |
1,472 | Why Cities are crucial for Climate Change and fixing it! | Cities consume 80% of the global energy supply and account for roughly an equal share of the global greenhouse gas emissions.
Cities are geographical nodes which concentrate wealth, people and productivity (Hoornweg, Freire, Lee, Bhada-Tata, & Yuen, 2011). Cities exist because they are economic agglomerations of talent, capital and labor that make possible production and trading at economies of scale and specialization of work. Cities are fundamental to how we organize our economy, create value and create employment. The emergence of a market economy and the industrial revolution paved the way for factory cities, aided by many innovations in manufacturing and transportation. Manufacturing created demand for skilled craftsmen, managers and energy-intensive production processes. Higher density of resource agglomeration paved the way for dense transportation networks which included roads, canals and railroads. Housing for city inhabitants, and their density in a geographical region, made it economically sensible to provide common services such as electricity, water and gas for indoor heating, sanitation and other services essential for quality of life (O’Sullivan, Urban Economics, 2009). Although many cities in the United States have shifted away from manufacturing and reinvented themselves as centers that serve the creative and knowledge economy, many of the agglomeration dynamics mentioned above have intensified rather than become obsolete. The creative economy attracts highly paid talent and skill that demand ever more services from the cities they live in, pushing city administrators to think about cosmopolitan amenities that attract more talent and youth. While these factors act as a loop that increases demand for concentrated energy consumption inside cities, cities — with their density of talent, wealth and connection — are our best hope of finding solutions for mitigating and adapting to climate change.
For the first time in history, more than half of the world’s population is now urban. Economists believe this trend of urbanization is the defining phenomenon of this century. Most of the population in industrialized countries is urban. This is a trait shared by many countries in Latin America, while many developing countries in other regions are following the same path (Hoornweg, Freire, Lee, Bhada-Tata, & Yuen, 2011). Credible studies have found that cities consume 80% of the global energy supply and account for roughly an equal share of the global greenhouse gas emissions (UNEP, 2012) (UN-HABITAT, 2012). Roughly half of these GHGs come from burning fossil fuels for urban transportation, and energy end-use appliances in urban housing and buildings are responsible for the other half (Williams, 2007). This simplifies the problem in a way that is manageable with the proper policies and technologies in place. Energy experts agree that electrification of end-user services in the transportation, buildings and industrial sectors is one of the most viable means of deep de-carbonization of the urban energy system and the economy. For this to work, the electricity system will have to be decarbonized as well (National Renewable Energy Laboratory, 2017).
One of the founding objectives mentioned in Article II of the 1992 United Nations Framework Convention on Climate Change (UNFCCC) is to ensure stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent “dangerous anthropogenic interference with the climate system”. It also mandates that such stabilization “should be achieved within a time-frame sufficient to allow ecosystems to adapt naturally to climate change… and to enable economic development to proceed in a sustainable manner” (United Nations, 1992). This objective itself defines the two pathways by which the world and local communities can think about the problem of climate change. The first part refers to mitigation, whereas allowing ecosystems to adapt naturally to climate change refers to adaptation. Subsequent publications have included adapting human socioeconomic processes to withstand disruptions from a changing climate. Climate change mitigation comprises the efforts to reduce or prevent emission of greenhouse gases into the atmosphere, as well as enhancing sinks of greenhouse gases. Adaptation is defined as the “adjustment in natural or human systems in response to actual or expected climatic stimuli or their effects, which moderates harm or exploits beneficial opportunities” (IPCC, 2007). The fundamental difference between these efforts is that mitigation measures are global, since all nations and communities share one common atmosphere, whereas adaptation measures must be local, depending on the different vulnerabilities and exposures of communities in different parts of the world. While developed nations and cities — and more recently China and India — are the main contributors to climate change, developing nations suffer the most from its impacts.
Cities are naturally vulnerable to climate disasters, and at increased risk of such occurrences, because of the density of their activities. They are concentrated centers of people, assets, and economic activity (Baker, 2012). In the developing world this vulnerability is heightened by urban informality and the exclusionary policies that some cities have pursued. Many incoming migrants to cities have sought shelter in favelas or informal housing. Most of the time these informal settlements of the urban poor spring up on suboptimal land, which city authorities may have demarcated as unfit for human settlement because of low-lying flood-prone terrain, poor soil conditions, or poor access to sanitation and other services. In such a context, an increased occurrence and severity of natural disasters — urban flooding, hurricanes or sea level rise — can be disastrous for the urban poor living in these communities. In New York City, Mayor Bill de Blasio speaks of the steps his administration has taken to ensure the safety of poor urban communities in the Manhattan region against storm surges during hurricanes and long-term sea level rise. Even New York City is vulnerable, since it is a coastal city. Kelley (2014) analyses the trends and roots of the Syrian civil war and concludes that there is a relation between long-term climate disasters in the Fertile Crescent and the poor management of the migration of youth from rural farming communities in Syria to the cities (Kelley, 2014).
In recent weeks in 2017 the world has seen multiple 500- and 100-year storms and hurricanes near the continental United States. The hurricanes Irma, Harvey and Maria affected large areas in Texas, Florida and Puerto Rico. City infrastructure, water, sanitation and electricity delivery services were destroyed in a matter of minutes. The difference between the responses in Florida and Texas and the response in Puerto Rico lay not only in the poor state of infrastructure preparedness, but also in the poor state of rapid response and disaster preparedness. A major factor in being prepared for disasters is advance planning and having a trained rapid-response team. Generally, a storm situation such as Hurricane Maria will increase competition for resources. Thus it is essential to plan ahead, including securing shelter, food, first aid, shower and toilet facilities and other essentials for crews who will be working around the clock for days (EEI, 2014). This is why accountable and transparent governance, as well as planning ahead, is important for cities. This was also a good example of how cities and city utilities that pooled resources and provided mutual assistance were able to respond rapidly, where isolated cities such as San Juan, Puerto Rico, could not. Greater awareness of climate disasters, and of the consequences of climate refugees and political instability, is bound to bring countries together — including the United States — to share knowledge and voluntarily agree to curb emissions. It is interesting to see that many cities and mayors in the U.S. have voluntarily declared that they will stand by the Paris climate change agreement.
Walter Vergara from the World Resources Institute is positive that many cities are taking innovative leadership on climate change mitigation. LAC cities such as Curitiba implemented the Bus Rapid Transit (BRT) system for the first time, and it has gained popularity across the world because of its relatively low costs and short construction periods relative to rail transit. Biofuels research is another field in which Brazil leads clean energy innovation in the world. The LAC region is also innovating in niche areas such as electric mass transport, where the rest of the world has not progressed much. In many ways the LAC region has a comparative advantage in cleaner transportation modes.
In conclusion, cities are part of the problem as well as an integral part of the solution to climate change. Cities are a hotbed of talent, innovation and capital. Cities and their density make possible the economies of scale and scope needed for new clean energy innovation to be commercialized and tested in a profitable manner. Cities are also dynamic hubs of knowledge and associations such as the C40 that have come together to be a force for sharing adaptation and mitigation knowledge with the rest of the world and the developing world. There is no doubt that cities will be agents leading the fight against climate change now and in the future. | https://tisuragamage.medium.com/why-cities-are-crucial-for-climate-change-and-fixing-it-b44b9469e82c | ['Tisura Gamage'] | 2019-07-29 18:53:23.479000+00:00 | ['Technology', 'Climate Change', 'Cities', 'Innovation', 'Density'] |
1,473 | Geography and controversy. | Pacific Ring of Fire. The huge length of the ring makes it possible to find the most suitable location for a green hydrogen project based on the use of geothermal energy. The criteria for evaluating such a location for a new project are: proven geothermal energy reserves, accessibility, proximity to markets, the attitude of the local administration, the competitive environment, etc. Not many locations meet most of the criteria. Studying each such location, discussing it in the specialised community and on discussion platforms, and publishing the results of such studies will allow us to formulate plans for the practical implementation of the project and determine realistic steps. These are powerful arguments for increasing investor confidence in the project; special attention should be paid to the huge number of digital technology activists who can become the main investment component. The discussion of each location’s potential should be free from political preferences and territorial differences. The project is an opportunity to unite the scattered efforts of specialists from many industries and countries toward the practical implementation of a green hydrogen project using geothermal energy and digital technologies. | https://medium.com/@effectivelybe/geography-and-controversy-b3213c9adab0 | ['Effectively Be'] | 2021-03-09 23:04:04.056000+00:00 | ['Green Energy', 'Blockchain Technology', 'Green Hydrogen', 'Blockchain', 'Ecology'] |
1,474 | THE FUTURE OF SMART FARMING | WatchNET IoT devices provide reliable, flexible, and more efficient environmental monitoring solutions to master growers and greenhouse owners.
Our dashboard includes E-Map and graphs gives great data visualization for quick actionable decisions for the best results.
Soil moisture, soil temperature, electrical conductivity of water, CO2, light, air quality, water presence or dryness, wind patterns and many other variables can be monitored. | https://medium.com/@sarunaa01/the-future-of-smart-farming-ca672dafaa85 | ['Arunaa S'] | 2020-12-17 13:44:31.111000+00:00 | ['IoT', 'Iot In Smart Farming', 'Internet of Things', 'Iot Platform', 'Farming Technology'] |
1,475 | What nobody ever tells you, though, when you are a wide-eyed child, are all the little things that come along with “growing up.” | Life is a journey of twists and turns, peaks and valleys, mountains to climb and oceans to explore.
Good times and bad times. Happy times and sad times.
But always, life is a movement forward.
No matter where you are on the journey, in some way, you are continuing on — and that’s what makes it so magnificent. One day, you’re questioning what on earth will ever make you feel happy and fulfilled. And the next, you’re perfectly in flow, writing the most important book of your entire career.
What nobody ever tells you, though, when you are a wide-eyed child, are all the little things that come along with “growing up.”
1. Most people are scared of using their imagination.
They’ve disconnected with their inner child.
They don’t feel they are “creative.”
They like things “just the way they are.”
2. Your dream doesn’t really matter to anyone else.
Some people might take interest. Some may support you in your quest. But at the end of the day, nobody cares, or will ever care about your dream as much as you.
3. Friends are relative to where you are in your life.
Most friends only stay for a period of time — usually in reference to your current interest. But when you move on, or your priorities change, so too do the majority of your friends.
4. Your potential increases with age.
As people get older, they tend to think that they can do less and less — when in reality, they should be able to do more and more, because they have had time to soak up more knowledge. Being great at something is a daily habit. You aren’t just “born” that way.
5. Spontaneity is the sister of creativity.
If all you do is follow the exact same routine every day, you will never leave yourself open to moments of sudden discovery. Do you remember how spontaneous you were as a child? Anything could happen, at any moment!
6. You forget the value of “touch” later on.
When was the last time you played in the rain?
When was the last time you sat on a sidewalk and looked closely at the cracks, the rocks, the dirt, the one weed growing between the concrete and the grass nearby.
Do that again.
You will feel so connected to the playfulness of life.
7. Most people don’t do what they love.
It’s true.
The “masses” are not the ones who live the lives they dreamed of living. And the reason is because they didn’t fight hard enough. They didn’t make it happen for themselves. And the older you get, and the more you look around, the easier it becomes to believe that you’ll end up the same.
Don’t fall for the trap.
8. Many stop reading after college.
Ask anyone you know the last good book they read, and I’ll bet most of them respond with, “Wow, I haven’t read a book in a long time.”
9. People talk more than they listen.
There is nothing more ridiculous to me than hearing two people talk “at” each other, neither one listening, but waiting for the other person to stop talking so they can start up again.
10. Creativity takes practice.
It’s funny how much we as a society praise and value creativity, and yet seem to do as much as we can to prohibit and control creative expression unless it is in some way profitable.
If you want to keep your creative muscle pumped and active, you have to practice it on your own.
11. “Success” is a relative term.
As kids, we’re taught to “reach for success.”
What does that really mean? Success to one person could mean the opposite for someone else.
Define your own Success.
12. You can’t change your parents.
A sad and difficult truth to face as you get older: You can’t change your parents.
They are who they are.
Whether they approve of what you do or not, at some point, no longer matters. Love them for bringing you into this world, and leave the rest at the door.
13. The only person you have to face in the morning is yourself.
When you’re younger, it feels like you have to please the entire world.
You don’t.
Do what makes you happy, and create the life you want to live for yourself. You’ll see someone you truly love staring back at you every morning if you can do that.
14. Nothing feels as good as something you do from the heart.
No amount of money or achievement or external validation will ever take the place of what you do out of pure love.
Follow your heart, and the rest will follow.
15. Your potential is directly correlated to how well you know yourself.
Those who know themselves and maximize their strengths are the ones who go where they want to go.
Those who don’t know themselves, and avoid the hard work of looking inward, live life by default. They lack the ability to create for themselves their own future.
16. Everyone who doubts you will always come back around.
That kid who used to bully you will come asking for a job.
The girl who didn’t want to date you will call you back once she sees where you’re headed. It always happens that way.
Just focus on you, stay true to what you believe in, and all the doubters will eventually come asking for help.
17. You are a reflection of the 5 people you spend the most time with.
Nobody creates themselves, by themselves.
We are all mirror images, sculpted through the reflections we see in other people. This isn’t a game you play by yourself. Work to be surrounded by those you wish to be like, and in time, you too will carry the very things you admire in them.
18. Beliefs are relative to what you pursue.
Wherever you are in life, and based on who is around you, and based on your current aspirations, those are the things that shape your beliefs.
Nobody explains, though, that “beliefs” then are not “fixed.” There is no “right and wrong.” It is all relative.
Find what works for you.
19. Anything can be a vice.
Be wary.
Again, there is no “right” and “wrong” as you get older. A coping mechanism to one could be a way to relax on a Sunday to another. Just remain aware of your habits and how you spend your time, and what habits start to increase in frequency — and then question where they are coming from in you and why you feel compelled to repeat them.
Never mistakes, always lessons.
As I said, know yourself.
20. Your purpose is to be YOU.
What is the meaning of life?
To be you, all of you, always, in everything you do — whatever that means to you. You are your own creator. You are your own evolving masterpiece.
Growing up is the realization that you are both the sculpture and the sculptor, the painter and the portrait. Paint yourself however you wish. | https://medium.com/@ramsalisburyvskrawietzlive/what-nobody-ever-tells-you-though-when-you-are-a-wide-eyed-child-are-all-the-little-things-that-600aa43def46 | ['Ramsalisbury J Vs Krawietz K Mies A Live Tv'] | 2020-11-19 17:42:18.591000+00:00 | ['Technology', 'Sports', 'Social Media', 'News', 'Live Streaming'] |
1,476 | Koss KPH30i Headphones Review | The best-selling headphone in Koss’s lineup is the classic Porta Pro. First launched in 1984, it has a lovely and timeless retro design, mixing a fully customizable fit with the sumptuous sound of their iconic 60ohm speaker drivers. You’ll see its praises sung far and wide, and even I got in on that action.
Not a company to waste the engineering they put into one of the most celebrated headphone drivers of all-time, every few years Koss releases a new design based around that classic sound goodness. While some of these revisions, including the Koss UR40 I reviewed recently, go part of the way towards recapturing the magic of the Porta Pro design, only 2004’s KSC75 has ever drawn as much acclaim as the original.
It’s time to add a new entry to that list.
OVERVIEW
Koss’s KPH30i launched in 2017. Their regular price is just $30. And they often go on sale for less than that.
At $30, if you have any interest in them whatsoever, they are perhaps the safest blind buy in audio. Once I saw Metal571 raving about them on Twitter, I practically knocked over my furniture in my rush to buy one.
Or I would have, if I hadn’t already been sitting at my computer.
The KPH30i’s are an open back on-ear headphone. You can get them in “Black” and “White” variants. The “Black” Model is really a dark grey with black accents, and the “White” model has blue accents. There’s also a limited edition version that shows up every once in a while with one red pad and one blue pad inside a beige chassis, for a throwback feel.
They have a 4-foot permanently attached cable with a high-quality 3.5mm plug, an in-line mute button and basic microphone, and a brand new suspension headband and auto-adjusting ear cup design created by Koss just for this model.
Those design improvements and some small tweaks to the audio are what elevate this to the level of its celebrated predecessors. And perhaps, beyond them.
SOUND QUALITY
Just like the Porta Pros, the KPH30i’s have a warm signature, but with just enough detail retrieval to avoid any of the bad things associated with a bass-focused response.
Lows are powerful and creamy. There’s a slight emphasis in the midbass, and they of course don’t have the same sub bass punch that a closed-back headphone would provide, but they deliver far more quality bass than something this small has any right to.
Mids are basically perfect to my human ears. Instruments and vocals sound very natural, and are forward enough in the mix to not get drowned out under the bass.
Treble takes a bit of a back seat, with a sudden shot coming out of nowhere depending on what you’re listening to, and nothing even approaching the fatigue level. Highs have never been the strength of this driver, and the same ragged but gentle treble response shows up here to do its duty competently.
Still, for $30 the level of sound here is remarkable. Compared to the Porta Pro, it’s just a touch more relaxed and neutral. The Porta Pros are more “fun” to my ears, and both are great for extended listening sessions thanks to their gentle treble response. Most music/audio lives in the midrange, and it’s here that the KPH30i delivers just as well as headphones costing many multiples of this price.
ISOLATION/SOUNDSTAGE
There’s no isolation here whatsoever! So, in spite of Koss advertising these as a portable headphone, they aren’t the best portable listening experience. They’ll let in all the sound from your surroundings, and if you push them past moderate volumes, sound leak is also quite prominent.
That didn’t stop me from doing my usual coffee shop listening test, which I don’t totally recommend. The warmth will help mask your surroundings if you push the volume up, but I don’t like to listen long at high levels.
On the plus side, these have a generous soundstage, just like other more expensive open back headphones with decent sound signatures. They have a nicely pleasant out-of-the-head listening experience without the congestion and intimacy of studio-style closed headphones, and you’ll be impressed at their ability to image and separate.
COMFORT
Sublime.
I like the overly adjustable fit of the old Porta Pro design, but it takes a little work. You’ve got to expand and contract the metal headband bits and adjust the “comfort zone” temple pads for a while to get a perfect fit. Its circular ear pads also won’t always line up perfectly with your ears.
The KSC75's use an ear clip design which is easier to fit quickly onto the ear, but the hooks might irritate the skin of some users, and if you wear glasses like I do, they’re a little harder to use for long sessions. They also retain the same circular pad and housing of the Porta Pros.
For the KPH30i’s, Koss shifted over to a D-shaped ear pad design, and it’s but the first of many comfort delights. The D-shaped pads are very easy to place correctly onto your ears since they have the same shape, and the padding is soft enough to evenly spread pressure across the whole ear.
Helping with this are the hybrid suspension headband and the new auto-adjusting ear cup joint. The headband has a wide range of standard click-style adjustment, and a silicone suspension band on top that will help it conform to your head’s personal shape. The old ball-and-socket joints on the ear cups are gone, replaced with a cool hourglass shaped rubbery grommet thing that instantly snaps into the right position for your ears.
Once you get the adjustment sliders set correctly for your head size, you just put them on and have a nice, light fit. It’s wonderful, and it’s my favorite fit ever in a 60Ohm Koss Headphone. It doesn’t pinch my ears or my glasses arms at all, and stays comfy for hours.
BUILD/DESIGN
Build quality fans will be a little let down by everything on the KPH30i’s… except for the cable. The 3.5mm plug is robust, with a long thick plastic housing featuring the Koss logo and a metal spring strain relief. The rest of the cable below the y-split lives up to this level, with a thicker rubber surround than many other small Koss models. It has a little bit of spring to it but it’s not very tangle-prone.
The headband and headphone are constructed almost entirely out of plastic and silicone. There’s no metal frame for reinforcement here, but that also makes these tremendously light, helping with their comfort. They’re not robust traveling or studio headphones, and they’re not really made for sitting on or throwing around.
Still, the many new features of the design do enough to overcome the plastic build. And like every Koss headphone product, they have a lifetime warranty.
EXTRAS
The in-line play/pause button works fine on both my MacBook and Android phone, and I like that it’s big and runs most of the length of the barrel it’s attached to. It’s easy to grab and click without feeling around too much. The included microphone is serviceable for phone calls. It’s able to suppress a little background sound, but your voice will sound the same way it does coming out of any of these small phone-style microphones.
Honestly though, it’s incredible to get either of these features at this price point with sound quality this pleasant. I wouldn’t have blamed Koss at all for including just a plain cable.
FINAL THOUGHTS
This is the most easily-recommendable audio thing I’ve reviewed in a very long time. It’s a great starter open-back headphone with sound quality that I’d find good in a $150 product. It’s not the most robust thing ever built, but its new design touches make it a worthy addition to the pantheon of “Shockingly Good Koss Headphones.”
These are more than worth the $30 they cost, and they’re completely free of the weird gimmicks that so dominate this industry. All of the little design features work exactly the way that Koss says they will, and they’re paired with great sound. It’s important in such a buzzword-focused market to support companies that tell you the truth, and these cost the same as 4 to 5 cups of fancy coffee. Go forth with confidence that you’re about to see what “good sound” is all about without breaking your wallet. | https://medium.com/@xander51/koss-kph30i-headphones-review-1c209558b920 | ['Alex Rowe'] | 2019-04-26 20:01:00.527000+00:00 | ['Music', 'Headphones', 'Gadgets', 'Technology', 'Audio'] |
1,477 | Using Shared Libraries in a Jenkins Pipeline | Extending code libraries across Jenkins pipelines can seem like a daunting task. There are many different ways to accomplish the same thing in a Jenkins pipeline. Deciding between declarative or scripted pipelines. Choosing where and how to import libraries. There’s a laundry list of tasks for the Jenkins developer to work through. Not to mention keeping track of the environment and state of a pipeline as it evolves. So what’s the best way to share common functionality in different pipelines?
Using shared libraries (or Global Pipeline Libraries) we can easily and automatically pull reusable functionality into pipelines. Configuration is handled by Jenkins and there’s no messing about with manual path entries or environment variables. Shared libraries save implementation time, allow for easier code reuse and are simpler to manage in the long run.
Let’s take a look at how to setup a simple shared library and implement it in one of our pipelines.
Build the library
In order to use shared libraries, we must first layout the proper directory structure in the repository and write some Groovy code. For this example, we’ll use a basic Groovy class file that simply runs echo and outputs a message in the Jenkins console. Let’s setup a new directory for this inside our existing Git repository (this can be wherever you store your Jenkinsfiles and any accompanying Groovy files):
src/org/mytools/
Inside this directory you’ll create a new Groovy file called:
Tools.groovy
You should now have the following structure:
src/org/mytools/Tools.groovy
Now open up the Tools.groovy file and place the following code inside:
package org.mytools

class Tools implements Serializable {

    private static final long serialVersionUID

    def steps

    Tools(steps) {
        this.steps = steps
    }

    void myEcho(String msg) {
        steps.echo msg
    }
}
Let’s break down what is happening here. In the first line we’re defining the package that this will be part of. This is a Java package declaration which let’s us group code together as one unit (most of the files inside this directory should be in the same package).
In the next line we’re defining our class Tools which is an implementation of the Serializable class. This enables us to save state on our classes during a Jenkins pipeline. The serialVersionUID provides the variable for saving a unique state ID on the object.
The next few lines inside the Tools class handle pulling in the steps variable from the calling pipeline. We have to accept the steps as a parameter because classes in shared libraries have no knowledge of pipeline step functions like echo or sh . This tells our class that we expect this to be passed in as a parameter when instantiated.
Finally, we setup our myEcho function which simply accepts a string and then uses the pipeline echo step to print it to the console. This wraps up the class and will be where we place the bulk of our functionality.
That’s it for our directory structure and Groovy configuration. If at any point you get stuck setting up your library, you can reference the official Jenkins documentation on shared libraries.
Next, let’s look at how we pull this new package into Jenkins.
Configure the library in Jenkins
In order to use shared libraries inside of a pipeline in Jenkins we need to configure the library. Open up the Jenkins instance and navigate to Manage Jenkins -> Configure System. Scroll down until you locate the Global Pipeline Libraries section. Enter the relevant information as it pertains to your Git repository and checkout details:
Configuring a Global Shared Library in Jenkins.
You can leave the defaults checked/unchecked, but make one consideration regarding the Load Implicitly box. If you wish to have your libraries available automatically in every pipeline you may check this box, otherwise leave it blank and we’ll cover importing them manually. In the beginning I would advise leaving this unchecked so you are explicit about what libraries get used.
Once the setup of the library and associated repository is complete, let’s move on to actually utilizing it in a pipeline.
Import the library in a pipeline
Using shared libraries in a pipeline is incredibly simple and you’ll wonder why you hadn’t done it sooner. After building and configuring the shared library in Jenkins it is now available to pipelines and only needs to be declared. Let’s look at a sample pipeline that uses our new library:
@Library('mytools')
import org.mytools.Tools

Tools tools = new Tools(this)

pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                script {
                    tools.myEcho('test')
                }
            }
        }
    }
}
In the example pipeline above we add several items before declaring the pipeline itself. The most important element is the @Library entry. This tells the pipeline to use our new shared mytools library. However, this alone is not sufficient, we need to tell the pipeline which classes to use and then instantiate them.
First we’ll need to import the Tools class and then create a new object for it by calling new Tools(this) . Remember earlier when we said we’d have to pass in our steps? By providing the this keyword as a parameter to our class we provide the necessary pipeline context for things like echo .
Now that our object has been imported and created we can use it just like any other object. Inside any script blocks within a declarative pipeline we can simply call tools.myEcho and observe the output.
Shared libraries are straightforward and efficient. They save time and allow more of the focus to be on the core functionality of the pipeline, instead of where resources are and how they get imported. With shared libraries we can easily add more classes, modules and other Groovy objects without having to jump through hoops. | https://medium.com/swlh/using-shared-libraries-in-a-jenkins-pipeline-d20206943792 | ['Tate Galbraith'] | 2020-12-17 20:35:37.447000+00:00 | ['Jenkins', 'Technology', 'Software Development', 'Programming', 'Code'] |
1,478 | Our programming and design school: reducing the high school dropout rate, a bit (and a byte) at a time! | I’m passionate about talent management and culture. My objective is to make Osedea the best workplace for our team.
| https://medium.com/osedea/our-programming-and-design-school-reducing-the-high-school-dropout-rate-one-bit-byte-at-a-time-4cefa3b7a329 | ['Ivana Markovic'] | 2020-09-09 14:58:53.333000+00:00 | ['Technology', 'Learning To Code', 'Community', 'Education', 'UX'] |
1,479 | Tech Investment & M&A Trends Continue to Rise in 2021 | When the coronavirus had first hit the U.S. and other countries early last year, clobbering the economy and businesses, many technology investors could not have predicted the future which is today. With numerous job cuts, broken supply chains, and profit warnings, business models were flipped on their heads. However, with the ongoing vaccine deployment by nations throwing the worst behind us, companies have now started forming digital transformation strategies to recuperate from the crisis.
Predictions for 2021 are catalyzed by the disruptive forces of the COVID-19 pandemic, which are expected to alter the fate of the global business environment for the near future.
“For future-fit IT leaders, the risks aren’t limited to the data center or network outages. Today’s risks include rapidly changing consumer trends that require digital pivots, increasingly complex security concerns, the ethical use of AI”, according to Brian Hopkins, an analyst at Forrester research.
While many businesses have been quick to realize the need to digitize their business processes and business models to survive, some are still making their way through. The rapid digitalization driven by the pandemic will continue into the recovery. The pandemic has been a testament to the fact that the ability to adapt and respond to sudden disruptions is a strong determinant of success in a highly digitalized economy. In order to reorganize business operations to align with the conditions the pandemic has left us with, organizations are re-strategizing their IT spending plans.
According to IDC, by the end of 2021, based on lessons learned, 80% of enterprises will put a mechanism in place to shift to cloud-centric infrastructure and applications twice as fast as before the pandemic. The pandemic-driven remote working environment has accelerated demand for cloud modernization and IT automation, leading companies to further investments in the space.
“We’ve seen two years of digital transformation in two months”, Microsoft CEO Satya Nadella.
The remote-work fashion has also handed fraudsters a new and very tempting field of play, leading to a surge in phishing and ransomware attacks. This has accelerated the need for the already booming industry of cybersecurity. According to Forbes.com, 55% of enterprise executives plan to increase their cybersecurity budgets in 2021 and 51% are adding full-time cyber staff in 2021, as companies invest to support remote working and a rapid move into cloud-based software services. The response to the crisis continues to press department budgets and limit resources for other less essential functions — a situation that we believe will direct investments in 2021.
The pandemic-led lockdowns have also affected consumer behavior in ways that have and will continue to spur growth in the field of Artificial Intelligence. As consumers buy more online to avoid the new risks of shopping in stores, they are giving sellers more data on preferences and shopping habits. With the post-Covid 19 resurgences of investment in these sectors, many firms are seeking to significantly step up investments in AI and Data. International Business Machines Corp estimates that only about 20% of companies use such AI-powered technology today, according to a report by Economic Times. But they predict that almost all enterprises will adopt it in the coming years.
The crisis led to many people working from their homes, which companies such as Zoom and DocuSign have benefitted from. Forrester expects remote work to rise to 300% of pre-COVID-19 levels. The circumstances, however, worked wonders for some technologies and helped create a frothy market for initial public offerings and technology M&A that should extend well into this year. Riding on the remote working environment, Salesforce agreed to buy fellow software company Slack for an eye-popping $27.7 billion, highlighting how important Slack’s workplace collaboration technology has become. With consumers staying indoors, the demand for food delivery services has also exploded. DoorDash, which popped more than 80% in its IPO, valuing the food delivery company at $71.3 billion, was one of the biggest food delivery market debuts. A number of big Silicon Valley companies, including Palantir Technologies Inc and Snowflake Inc, also had blockbuster IPOs, riding on a stock market rally in the second half of the year that was fueled by stimulus money and hopes of an effective COVID-19 vaccine.
Actions adopted to drive the digital journey by the adoption of technologies due to the pandemic are expected to further spur M&A in the field of cloud, AI, cybersecurity, data analytics.
With 52% of executives who pursued digital technologies via M&A saying that the approach exceeded expectations and 45% reported similarly for digital partnerships, 2021 is set to see an increase in deals, corporate venture capital, and partnership investments, according to a report by EY.
As always, entrepreneurs who lead their companies to make the right investments and build strategic partnerships during times of crisis have emerged stronger, through increased access to capital, large global clients, and delivering digital products and services that disrupt the market in a still-uncertain economy. | https://medium.com/@saglobaladvisors/tech-investment-m-a-trends-continue-to-rise-in-2021-edc76c3a68e6 | ['Sa Global Advisors'] | 2021-06-23 06:01:06.034000+00:00 | ['Mergers And Acquisitions', 'Technology', 'Cloud Computing', 'Covid 19', 'Deal'] |
1,480 | Pension Central is currently helping employers fulfil their monthly pension remittance obligation. | Pension Central is currently helping employers fulfil their monthly pension remittance obligation. ChamsAccess Jul 5·3 min read
The pension remittance space in Nigeria is filled with great promise, but is still faced with inefficiencies, particularly in funds reconciliation and delivery to PFAs. For some organisations, pension remittance is a task that is usually dreaded, particularly the continuous monthly cycle of signing multiple payment cheques, processing multiple schedules, and visiting multiple banks to submit each schedule. Ultimately, the manual efforts most times lead to funds not hitting Pension Fund Administrators (PFAs) for days and weeks, even months.
Indeed, the pension industry deserves a remittance solution that eases and solves problems around pension schedule preparation, remittance and funds reconciliation.
In a bid to bringing efficiencies to this space, Pension Central was launched, with a promise of enabling organisations remit pension funds easily and securely, with few clicks, guaranteeing same-day delivery of funds to all PFAs and eliminating all reconciliation issues.
Organizations can sign up for a free employer account in seconds by inputting just their PenCom code which is then validated to be true, after which they enter their email addresses, reset their default password and they begin to remit! Existing solutions don’t provide a sign-up process as seamless as this, with companies still having to complete some paper and leg work, going to a bank, then having to wait some days before signup process can be completed.
“We basically want employers and corporate organisations to focus on the core of their businesses and allow Pension Central to take care of their pension remittance processes.”
Every year, organisations are expected to get their PenCom Compliance certificate, a report that attests that those organisations successfully remitted pension for the year. That process is currently filled with lots of manual processing for most organisations who have to compile pension schedules and payment receipts for every pension schedule processed all through the year, then compile for processing. However, those processes have been completely eliminated, all schedules and receipts are accessible in one place, no need for manual efforts, no need for multiple paperwork.
All with few clicks, Pension Central enables organisations to process their compliance reports for the year seamlessly.
Some of the Key Benefits are:
Pandemic Compliant (e.g Covid-19): Pension remittance through your computer from the comfort of your office/home eliminates any form of exposure to any virus.
Pension remittance through your computer from the comfort of your office/home eliminates any form of exposure to any virus. Instant Delivery of Schedule & Payment : By integration, all pension schedules and payments are delivered to PFCs, shortening the remittance cycle of pension from days to seconds.
: By integration, all pension schedules and payments are delivered to PFCs, shortening the remittance cycle of pension from days to seconds. Operational Overhead Reduction: No more preparation of multiple physical schedules, signing of cheques, and physical delivery to the banks. Schedule and payments are purely electronic.
No more preparation of multiple physical schedules, signing of cheques, and physical delivery to the banks. Schedule and payments are purely electronic. Payroll Integration: flexibility of integrating with Payroll systems so that all pension items for all employees are extracted and remitted to PFAs on the fly
Since launching in October 2020, Pension Central has helped over 500 employers to seamlessly fulfil their monthly pension remittance obligation till date with an impressive return rate of close to 90% of signed up organisations month on month.
Pensioncentral.ng is an initiative of Chamsaccess Limited, Nigeria’s leading provider of Identity and Access Management Solutions, located in Victoria Island, Lagos Nigeria. | https://medium.com/@chamsaccessltd/pension-central-is-currently-helping-employers-fulfil-their-monthly-pension-remittance-obligation-4860d094dbfa | [] | 2021-07-05 10:19:14.925000+00:00 | ['Technology', 'Pension Tech', 'Pension', 'Startup'] |
1,481 | JavaScript Basics — Prototypes. JavaScript uses the prototypical… | Photo by Ivan Jevtic on Unsplash
JavaScript is one of the most popular programming languages in the world. To use it effectively, we’ve to know about the basics of it.
In this article, we’ll look at JavaScript prototypes.
Prototypes
Prototypes are objects that other objects inherit from.
It’s the way that inheritance is done in JavaScript.
For instance, if we create an empty object:
const empty = {};
We can call the toString method on it.
We can write:
console.log(empty.toString());
If we call that, we get [object Object] logged in the console log.
Even though our object has nothing defined, they still have inherited properties.
The toString method is from the Object.prototype property, which most objects inherit from by default.
If a property isn’t in the object itself, then the JavaScript interpreter will search its prototypes for the property.
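For instance, a lookup can fall through to the prototype, while an assignment creates an own property that shadows the inherited one (a small illustrative sketch):

```javascript
const base = { greet: 'hello' };
const child = Object.create(base);

// 'greet' is not an own property of child, so lookup falls through to base
console.log(child.greet); // 'hello'

// Assigning creates an own property that shadows the inherited one
child.greet = 'hi';
console.log(child.greet); // 'hi'
console.log(base.greet);  // 'hello', the prototype is unchanged
```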
We can use the getPrototypeOf method to get an object’s prototype.
For instance, we can write:
console.log(Object.getPrototypeOf(empty));
And we get an object with a bunch of functions logged.
If we log the empty object, we can see the __proto__ property with the same thing.
Many objects don’t inherit directly from Object.prototype .
For instance, functions inherit from Function.prototype , which inherits from Object.prototype .
And arrays inherit from Array.prototype which inherits from Object.prototype .
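We can verify these chains directly with getPrototypeOf (a quick illustrative check):

```javascript
// Functions inherit from Function.prototype
console.log(Object.getPrototypeOf(Math.max) === Function.prototype); // true

// Arrays inherit from Array.prototype...
console.log(Object.getPrototypeOf([]) === Array.prototype); // true

// ...and both of those prototypes inherit from Object.prototype
console.log(Object.getPrototypeOf(Array.prototype) === Object.prototype); // true
console.log(Object.getPrototypeOf(Function.prototype) === Object.prototype); // true
```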
Object.create
We can use Object.create to create an object with the prototype we pass into the method.
For instance, we can write:
const protoDog = {
  speak(words) {
    console.log(words);
  }
};

const dog = Object.create(protoDog);
Then we see that when we log the value of dog , we see the __proto__ property with the speak method, which is what we have in protoDog .
Classes
Classes in JavaScript are nothing more than syntactic sugar on top of its prototypical inheritance model.
A class defines the shape of a type of object by listing its methods and properties.
An object created from a class is called the instance of a class.
To create an instance of a given class, we’ve to make an object that derives from the proper prototype.
Classes are just constructor functions with better syntax.
We can define constructor functions to return an object as follows:
function Dog(name) {
  this.name = name;
}

Dog.prototype.speak = function(words) {
  console.log(`${this.name} says '${words}'`);
};
We have the Dog constructor with the speak method in its prototype.
Then we can create a Dog instance by writing:
const dog = new Dog('james');
dog.speak('hello');
All functions automatically get a property named prototype , which we can put instance methods into.
Names of constructors are capitalized so that they can be distinguished from other functions.
If we check the prototype, we get that:
console.log(Object.getPrototypeOf(dog) === Dog.prototype);
logs true .
Better Syntax
To make our lives easier, we can use the class syntax to rewrite our Dog constructor.
For instance, we can write:
class Dog {
  constructor(name) {
    this.name = name;
  }

  speak(words) {
    console.log(`${this.name} says ${words}`);
  }
}
A class starts with the class keyword.
Then we have the constructor to replace the constructor function body.
The speak method is now inside the class instead of adding it to the prototype property.
Class declarations only allow methods to be added to the prototype.
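We can confirm this with hasOwnProperty (a small check, redefining a minimal Dog class so the snippet stands alone):

```javascript
class Dog {
  constructor(name) {
    this.name = name;
  }

  speak(words) {
    return `${this.name} says ${words}`;
  }
}

const dog = new Dog('james');

// The method lives on Dog.prototype, not on the instance
console.log(dog.hasOwnProperty('speak'));           // false
console.log(Dog.prototype.hasOwnProperty('speak')); // true

// Data properties assigned in the constructor do live on the instance
console.log(dog.hasOwnProperty('name'));            // true
```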
Classes can be used in statements and expressions since they’re just constructor functions.
So we can write something like:
const obj = new class {
  hello() {
    return "hello";
  }
};
We created a class expression that’s used immediately with the new keyword.
Then we can call obj.hello by writing:
console.log(obj.hello());
Overriding Inherited Properties
We can override inherited properties by redefining them in a subclass created with the extends keyword.
For instance, we can create a subclass from a class by writing:
class Dog {
  constructor(name) {
    this.name = name;
  }

  speak(words) {
    console.log(`${this.name} says ${words}`);
  }
}

class GoodDog extends Dog {
  speak(words) {
    console.log(`good dog ${this.name} says ${words}`);
  }
}
We created a GoodDog class that is a subclass of Dog .
The extends keyword indicates that we inherit all the properties from Dog ’s prototype.
Conclusion
JavaScript uses prototypal inheritance.
We inherit properties that are from an object’s prototype.
Most objects have Object.prototype as their base prototype.
To create objects with a fixed set of members, we can use constructor functions.
To make creating constructors easier, we can use the class syntax.
By John Au-Yeung · JavaScript In Plain English · June 14, 2020
Technology / Artificial Intelligence
Elon Musk’s Neuralink: The Game Changer or A Rushed Initiative?
There seems to be a lot going behind the scenes
Earlier this year, on the 28th of August, Elon Musk held an event to demonstrate his ambitious device, Neuralink. The updates on the prototypes of this unique initiative took the internet by storm. However, in this piece, we are about to dive deeper into the world of the Neuralink Corporation, and we will also lay out details on what has been happening behind the scenes. According to sources, there has been conflict in the corporation regarding the futuristic tech.
But what is Neuralink and what’s the hullabaloo all about?
A Brief Overview of Neuralink
Neuralink, a relatively tiny device, is a brain-machine interface connected to the brain's soft tissue via a set of thousands of tiny wires. The 'installation' process involves a tiny robotic sewing machine attaching the microwires to the brain.
These implants cause neither bleeding nor any noticeable trauma to the brain tissue. As a result, there is no need for anesthesia, and the patient subjected to the implant will be able to leave the facility within a few hours.
The purpose of Neuralink is to support treatment against various medical conditions. This device is projected to help in the following aspects:
Severe brain injuries
Paraplegia
Certain cases of blindness
Anxiety disorders
How does it do that? Neuralink can send and receive voltages to and from the brain, safely. In fact, Neuralink, as a concept, is not as fancy an idea as it sounds. It is decades-old tech in the field of brain-machine interfaces. The idea of using computers to improve brain functions has even been implemented in the past, to varying degrees of success. Neuralink is the next big step in the same direction, adding an overwhelming amount of precision and effectiveness.
The Conflict Plaguing Elon Musk
The conflict within the Neuralink Corporation was inevitable. The company operates hand in hand with both the fast pace of tech development and the slow, cautious nature of medical science. This has been causing friction. The mechanical engineers of the company often differ with their academic neuroscientists over the strategic roadmap. In these cases, it's been observed that Musk habitually sides with the engineers. The chaotic nature of the project led most of its founding scientists to leave: only 3 of the 8 remain.
For example, Musk pushed to implant 10,000 electrodes at a go into the brain of a sheep. According to the scientists, instead of taking smaller steps, like implanting a smaller number of electrodes, the company aimed to jump across the river. Consequently, the project sank.
Deadlines were rushed as well. Sometimes, scientists insisted that they need months for the research. In turn, they were offered weeks to complete the projects.
The Takeaway
No employee, whether former or present, entertains the speculation that Neuralink is underperforming. Despite these conflicts and the chaotic approach, the Neuralink Corporation is praised heavily for its engineering accomplishments. In particular, the sewing-machine robots earned appreciation from specialists. Dr. Andrew Schwartz, a Professor of Neurobiology at the University of Pittsburgh, said, "Overall, the concept is impressive and so is the progress that they have made."
Musk usually operates with an accelerated timeline and a hefty dose of chaos. This is a strategy he used to successfully transform the space and automobile industries. Even if it is highly unlikely that the device will be ready for human trials anytime soon, Neuralink is making progress. And it is doing it fast.
Lastly, if Neuralink manages to become a reality at an affordable price point, it will change the world forever. It will explore the full potential of our minds to take humanity a step closer to telepathy, memory repositories, and superhuman interfacing with artificial intelligence. Most importantly, it will help millions to regain motion, sight, hearing, and memory. And Musk, along with the entire team behind Neuralink, deserves some credit for venturing into such an undertaking.

By Anirban Kar · Age of Awareness · September 18, 2020
Study Finds Bad Web Design is Killing Us All With Stress

It's a devil's bargain to be a tech tester. Sure, you get to see the latest and greatest things, but you also are essentially being digitally poked and prodded to measure everything. This new study by Cyber Duck UX Agency out of the UK went further with digital torture to measure stress levels.
Participants were placed in front of a website designed (by Cyber Duck) to be filled with some of the worst user experience (UX) elements we encounter online. Images not loading, weird animations, and buttons that don’t work were just a few of the 10 categories.
Each category of web torture had 110 participants, so 1,100 people aged 20 to 58 were part of the study altogether. Researchers measured each person's systolic blood pressure (the amount of pressure in the arteries during heart contraction) as the participant used one of three custom-built sites. Normal blood pressure measures at 90 to 120, elevated is 121 to 129, and hypertensive stage one is 130 to 139. After that, you're in the high-BP category; higher than 180, you're in crisis. No participant had any known health conditions; they also didn't have any IT or web UX background.
The results speak for themselves and indicate exactly what you should not do in any website design. The chart above shows the offenders, with the items on the left being the worst. Slow-loading pages (taking 8.8 to 10.5 seconds to fully load) caused a 21% spike in blood pressure, while multiple pop-ups and auto-play music were almost as bad, causing a 20% increase each. Items such as 404 broken pages and non-clickable buttons were just middling annoyances in comparison.
Other problems include multiple image sliders (so slideshow carousels weren't that bad, huh?) causing a 10% increase in blood pressure and disorienting animations at a mere 5% increase.
So clearly, if you want to murder someone, find them the slowest webpage full of pop-ups and make them surf.
For an interactive version of the chart above, see below. You can blow it up to full-screen size for easier reading, to spare yourself a little stress.

PCMag · December 28, 2020
Cloudflare Loves to Mess With Incumbents
And that’s a very good thing for the Internet
We don't usually cover news here at Shift; this is more of a place for analysis and Thinking Out Loud. And it's rare that one company appears more than once here in any given year. But today, once again, Cloudflare has upended an important piece of the Internet's real estate, and it's just too rich not to note the why of it.
So first, the news. To celebrate the company's eighth birthday, Cloudflare is announcing the launch of a domain registrar. And because the company operates at massive scale, and can afford to do things most companies simply can't (or won't; looking at you, Google, Amazon, Facebook), the company is offering domains *at cost.* In other words, Cloudflare isn't making one red cent when you register a domain with them. What they pay to register a domain (and yes, that number is fixed, and the same for all domain registrars) is what you pay to register a domain.
Go ahead, go sell (or short) your GoDaddy stock. I’ll wait.
OK, you back? Look, I’m not writing this post because I think the news is *that* exciting, though I’ll tell you, I’ve not found many folks who love their domain registrar. I certainly don’t. Most of them are experts at confusing you, at upcharging you, and at scaring you that you’re about to either lose your domain or miss some important feature you didn’t know you want or need. I pay an average of about 15–20 bucks for each of the domains I own each year. Cloudflare’s price is about eight dollars.
I own close to 50 domains. That means I’ll save nearly $400 a year when I move all my domains to Cloudflare. That’s real cheddar.
But the real reason I'm writing this post is to point out what a merry market discombobulator Cloudflare has become. (Read more over at our new open web site.)

By John Battelle · NewCo Shift · September 27, 2018
New for Agents: ListPacks®, Premium Shareables, ListReports LIVE, and more!
From ListPacks® to ListReports LIVE and more, you won’t want to miss this update on our newest features for agents.
ListPacks®
Instantly share curated, customizable collections of listings on social media or directly with your buyers.
Capturing new leads has never been easier, or more delightful. With just a few clicks, agents can now share beautiful “packs” of listings with their buyer community.
Choose from pre-made categories such as “Homes with pools” and“Single-level homes”, OR set simple preferences to create your own custom ListPacks.
Everybody enjoys looking at homes — serious buyers and lookie-loos alike. ListPacks® provide consumers a truly differentiated experience with unique features including our interactive neighborhood infographics. Even better, each listing prominently displays your contact info, keeping viewers tightly connected to you when they’re ready to express interest in a particular property.
Ready to get started? Click here to start sharing ListPacks® now!

By Ryan Terrigno · ListReports · June 14, 2021
The IoT in Elevators Market Is Rising

IoT (Internet of Things) technology can enhance the service of elevators, improve their safety, and facilitate the upgrading of key components. It can also help provide lower waiting periods, offer mobile connectivity, and facilitate the use of more power-efficient systems.
Over the last decade, advances in IoT and AI have drastically changed the way elevators function. Paired with a surge in urbanization and a growing need for residential and commercial amenities, as well as a requirement for technologies that allow for smooth, safe, and efficient people flow, the sector has become a particularly lucrative option for the coming years.
As organizations look into the implementation of large-scale smart devices to face internal and external challenges, it’s expected that more intelligent and connected IoT elevators will steer a market expansion in the years to come, in particular in the remote monitoring sector.
How IoT Elevators Work
The IoT elevator market includes hardware (such as M2M gateway and Elevator Gateway), software, on-premise and cloud services, design, engineering, installation, refurbishing, maintenance, and repair.
The advantage of IoT devices is that they can manage large streams of performance data and predict maintenance requirements, taking a fraction of the time and effort a human technician would need to do the same analysis.
These devices can monitor operating conditions such as critical safety circuits, load weighing, number of door cycles and trips, wait times, and traffic trends. They can also improve preventive maintenance schedules because they work on a model based on the observed heat, friction, and noise, keeping track of wear and tear.
The constant stream of real-time data provided by IoT elevator monitoring devices can enable service professionals to diagnose problems before even reaching a building, saving valuable on-site time. Once they get to the elevator, a technician can already have corroborating information and suggested actions, and spend their time focused on fixing the problem.
Remote monitoring can also catch problems when they start, as opposed to doing so when an elevator is noticeably showing issues or going out of order. This saves potential significant down-time, which can now be planned and carried during off-peak hours if necessary.
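As a rough illustration of the kind of rule a remote-monitoring pipeline can apply, here is a minimal sketch in Python. The thresholds, field values, and function names are invented for illustration and are not taken from any vendor's actual system: it flags an elevator for proactive service when recent door-cycle readings drift above a fleet baseline.

```python
# Hypothetical predictive-maintenance rule: flag an elevator when its
# average door-close time drifts well above the baseline, so a technician
# can be scheduled before the door actually fails.

def needs_service(readings_ms, baseline_ms, tolerance=1.25):
    """readings_ms: recent door-close times in milliseconds."""
    if not readings_ms:
        return False  # no telemetry yet, nothing to flag
    avg = sum(readings_ms) / len(readings_ms)
    return avg > baseline_ms * tolerance

# A healthy unit stays near the 900 ms baseline; a worn one drifts upward.
print(needs_service([905, 910, 898], baseline_ms=900))    # False
print(needs_service([1180, 1210, 1195], baseline_ms=900)) # True
```

A real deployment would replace this fixed threshold with a model learned from historical heat, friction, and noise data, but the flow is the same: telemetry in, maintenance decision out.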
Key Players
Amongst key players in the IoT elevator market are Otis Elevator Company, Toshiba Elevators, Mitsubishi Electric Corporation, ThyssenKrupp AG, KONE Corporation, and Schindler Group. Among the latest developments in IoT elevators are IBM Watson/KONE, Otis/Digi, and Huawei.
IBM’s Watson IoT platform can already monitor elevator performance based on parameters and data processed in real-time in the cloud. The first manifestation of its specific services is KONE 24/7, a service that uses Watson and Predictive Maintenance Insights to bring tailored intelligent services to elevators. The AI system synthesizes incoming data, analyzes it, and provides the results to technicians who can identify the issues and make proactive decisions before potential problems become an issue.
Otis Elevator recently won the Business Impact Award at Digi International’s Global IoT Conference. The company developed a predictive maintenance solution that improves response rates in hundreds of thousands of locations worldwide. The Otis ONE platform personalizes the service experience through real-time updates, proactive communication tools, and predictive maintenance insights for all mechanical, electrical, and electronic components of an elevator.
Huawei Elevators Connection Solution is based on SDN and edge computing, and manages millions of elevators with interconnecting third-party IoT platforms. Their gateway supports pre-analysis and collaboration between devices and the cloud, implementing reliable preventive maintenance in real-time. The solution also provides in-depth learning from Big Data and includes AI fault identification and health assessment.
The Future of the IoT Elevators Market
Due to IoT and AI, smart elevators can generate data, identify problems, and make decisions on maintenance in real-time. This can reduce unexpected breakdowns and avoid disruptions to the movement of people within buildings, removing also the need for human technicians to frequently visit a site.
The ability for remote technicians to monitor heat changes, friction, and noise fluctuations, as well as more facilities to maintain equipment, are expected to drive market trends in the coming years. The elevators of the 2020s and 2030s will have a sort of ‘mind of their own,’ sending messages to the server in a smart building ecosystem that will automatically work out the best user experience.
The IoT elevators market is expected to gain traction in the coming years because of the massive implementation of smart equipment in high-rise buildings, malls, hospitals, and hotels. This is particularly the case for the North American market, where the presence of major players and the availability of a strong IoT infrastructure in the region, as well as a large proportion of digitization and technological breakthroughs, will be able to offer new growth avenues.
The next generation of IoT elevators could include machines that can be called with a mobile device, hold the doors longer for elderly passengers, help determine the optimum time to leave and return home, and offer a tailored journey based on facial recognition.

By Yisela Alvarez Trentini · October 26, 2020
Introducing Adobe Experience Platform Sandboxes
Companies often run multiple digital experience applications in parallel and need to cater to the development, testing, and deployment of these applications while ensuring operational compliance. Adobe Experience Platform is built to enrich digital experience applications on a global scale. We built a sandbox infrastructure to meet our customer, developer, and partner needs. This blog details our approach, architectural highlights, and what’s next.
Adobe Experience Platform helps brands to build customer trust and deliver better-personalized experiences by standardizing customer experience data and content across the enterprise, enabling an actionable, single view of the customer. Customer experience data can be enriched with intelligent capabilities and governed with robust data governance controls to use data responsibly while delivering personalized experiences. Experience Platform makes the data, content, and insights available to experience-delivery systems to act upon in real-time, yielding compelling experiences at the right moment.
Experience Platform with its access control capabilities and the sandboxes that allow for data and operational isolation enables customers with the right instrumentation for security and data governance, as they work to deliver real-time experiences through our open and extensible platform.
Experience Platform governance capabilities are built with an open and composable approach for brands to customize and use in the way they want. The API-first approach provides extensibility to integrate the features into custom applications and existing tech stacks. Adobe customers are provided with a robust set of access control capabilities that allows them to manage access to resources and workflows in the Experience Platform. Data is contained within sandboxes, providing operational and data isolation to support businesses and their regulatory constraints.
Approach
In order to address this need, Experience Platform provides sandboxes that partition a single Platform instance into separate virtual environments to help develop and evolve digital experience applications. Our sandboxes are virtual partitions within a single instance of Experience Platform, which allow for seamless integration with the development process of your digital experience applications.
A single Experience Platform instance supports production sandboxes and non-production/development sandboxes, with each sandbox maintaining its own independent library of Experience Platform resources (including schemas, datasets, profiles, segments, etc). All content and actions taken within a sandbox are confined to only that sandbox and do not affect any other sandboxes.
Non-production sandboxes allow users to test features, run experiments, and make custom configurations without impact on the production sandbox. In addition, non-production sandboxes provide a reset feature that removes all customer-created resources from the sandbox. Non-production sandboxes cannot be converted to production sandboxes.
In summary, sandboxes provide the following benefits:
Application lifecycle management: Create separate virtual environments to develop and evolve digital experience applications.
Project and brand management: Allow multiple projects to run in parallel within the same IMS Org, while providing isolation and access control. Future releases will provide support for deploying in multiple regions.
Flexible development ecosystem: Provide sandboxes in a seamless, scalable, and cost-effective way for exploration, enablement, and demonstration purposes.
Data and Operational Isolation with Sandboxes
Sandboxes in Experience Platform are the fundamental feature for data and operational isolation. Sandboxes help organizations to contain multiple initiatives, production or development-focused, within their own boundaries. With sandboxes, organizations can create distinct virtual environments to safely develop and evolve digital experience applications, with full control on what sandboxes are available to specific users or groups of users. Global multi-brand organizations can capitalize on sandboxes and contain their market or brand-specific digital experience activities within the boundaries of distinct sandboxes.
Sandbox Actions and Sandbox Lifecycle
To start trying out Adobe Experience Platform features, you can create a sandbox. If you have turned a few knobs, experimented with a few of the features, and want to start over to try something different, you can Reset the sandbox. This operation resets the sandbox to its initial state. After you are done playing around and no longer need the sandbox, you can Delete the sandbox.
User Actions on Sandbox:
1. Create Sandbox: Create a new sandbox.
2. Reset Sandbox: Resets the same sandbox to the initial clean state.
3. Delete Sandbox: Deletes the sandbox, and the sandbox cannot be recovered.
The user actions lead to the following state transitions in the sandbox:
Figure 1: Sandbox user-facing states
When the user hits the Create Sandbox button, the user request is validated. After the user request is validated, the sandbox transitions to a creating state. After the Sandbox is successfully created by the sandbox management service, the sandbox state is changed to Active .
When the user requests to Reset the sandbox, it is transitioned into Resetting state. After the sandbox management service successfully resets the sandbox, the sandbox state is updated to Active . If the reset fails, the sandbox is restored to the previous state.
When the user requests a Delete for the sandbox, the state is changed to Deleted immediately and the user can no longer see or access the sandbox.
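The user-facing transitions above can be captured in a small transition table. The sketch below is illustrative only (the state and event names follow the text, but the code and its helper names are invented, not the service's actual implementation):

```python
# Sketch of the user-facing sandbox state machine from Figure 1.
# Each key is (current_state, event) and maps to the next state.

TRANSITIONS = {
    ("Creating", "provision_succeeded"): "Active",
    ("Active", "reset"): "Resetting",
    ("Resetting", "reset_succeeded"): "Active",
    ("Resetting", "reset_failed"): "Active",  # restored to previous state
    ("Active", "delete"): "Deleted",          # Deleted is visible immediately
}

def next_state(current, event):
    try:
        return TRANSITIONS[(current, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} from {current}")

# Walk one sandbox through create -> reset -> delete.
state = "Creating"
for event in ["provision_succeeded", "reset", "reset_succeeded", "delete"]:
    state = next_state(state, event)
print(state)  # Deleted
```

Modeling the transitions as data rather than scattered conditionals makes it easy to validate user requests against the current state and reject anything illegal.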
The internal state transitions within Sandbox Management Service are as follows:
Figure 2: Sandbox internal states
The internal state of the sandboxes managed by the sandbox management service is more tightly linked to the provisioning process and orchestration with the component services. It tracks the user requests and the resources provisioning workflow. After the user requests for a sandbox, Sandbox Management Service internally transitions the sandbox to provisioning state till the provisioning is complete and the Sandbox can be moved to Active state. Similarly, for resetting, the service internally changes state to Resetting till the reset process is complete. For Delete , the resources need to be cleaned up for any data and records before the resources can be safely deleted/deprovisioned. Thus, the sandbox state is first changed to CleanUp , followed by Deprovisioning , and eventually Deleted . If at any point the processes fail, the sandbox is directly transitioned to Failed state.
The internal sandbox states are designed to be useful in multiple ways:
1. Rate Limiting: Sandbox Management Service tracks the number of user requests that are still being provisioned and the sandboxes that have not yet reached the Active state. This helps set limits on the number of requests a user can make to create sandboxes while requests are still in flight for sandbox creation by the same user. The same is applicable for the sandbox reset operation.
2. Monitoring and Alerting: Sandbox Request Manager Service, the standalone service for monitoring our Project & Application Lifecycle Management (PALM) infrastructure, tracks the number of requests in an interim state and uses it to measure the performance and latency of the system. This helps detect anomalies in the service and raise alerts on bottlenecks that can be resolved to avoid incidents. More details on the Sandbox Request Manager are described in the following section.
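A minimal sketch of the rate-limiting idea follows, in Python. The cap of two in-flight requests and all the names here are invented examples, not Adobe's actual limits or code: new creates are rejected while too many of the same user's sandboxes remain in an interim state.

```python
# Sketch: reject a new create/reset request while too many of the user's
# sandboxes are still in an interim (non-Active) state.

INTERIM_STATES = {"Provisioning", "Resetting", "CleanUp", "Deprovisioning"}

def can_submit(user_sandboxes, max_in_flight=2):
    """Count this user's sandboxes still in flight and compare to the cap."""
    in_flight = sum(1 for s in user_sandboxes if s["state"] in INTERIM_STATES)
    return in_flight < max_in_flight

sandboxes = [{"name": "dev1", "state": "Provisioning"},
             {"name": "dev2", "state": "Active"}]
print(can_submit(sandboxes))  # True: only one request is still in flight
```

Because the interim states are already persisted for the provisioning workflow, this check is a cheap query rather than a separate counter to keep in sync.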
Let’s dive into approaches we took in architecting Adobe Experience Platform sandboxes.
Architecture
Our PALM infrastructure consists of 2 core components. This blog will be focusing on one of the components: Sandbox Management Service. The purpose of sandbox management service, as the name suggests, is to manage the lifecycle of the sandboxes and manage the metadata about the sandboxes. It enables the users to create, reset, and delete sandboxes. Sandbox Management Service orchestrates the workflow to provision the required resources for the sandboxes by coordinating with ALL the user-facing services on Adobe Experience Platform. The main components in the Sandbox Lifecycle Management are as follows:
Figure 3: High-Level Architecture for Sandbox Management Service
Here are the architecture components in detail:
Platform UI: Adobe Experience Platform UI is the primary interface to manage sandboxes.
PALM SDK: PALM SDK is an internal SDK for Platform services and is not available to customers. PALM SDK makes implementing sandboxes and access control painless for Platform services by abstracting away the complexities and internal details. It is a client library that provides common sandbox lookups and access checks needed by services. API clients are used to interface with the backend services. It is a lightweight library that can be added as a Maven dependency.
Sandbox Management Service: Core service for managing the lifecycle of sandboxes. The service was developed on the API-first principle and has full REST API support.
Access Control Service: Service that enables users to control access to Platform workflows, data, and resources based on their roles and business needs. It is an important component of our PALM infrastructure. More about Access Control Service to come in following blogs.
Provisioning Service: Core Experience Platform service for orchestrating the provisioning of Platform cloud resources, e.g. Data Lake, CosmosDB, and Azure Data Factory.
Adobe Experience Platform component services: Adobe services that are offered as Experience Platform products and need to be provisioned if the user adds them to their license.
As mentioned above, each sandbox maintains its own independent library of Platform resources that includes schemas, datasets, profiles, and segments. This implies that for each Sandbox, all the platform resources need to be provisioned. When the user requests for a new sandbox, the provisioning for each new sandbox is initiated by the Sandbox Management Service as it is responsible for managing the lifecycle of the sandboxes. It registers the user request and then contacts the provisioning service with the details to provide the required platform resources for the user.
Provisioning of the resources is not a trivial process, and the time required is in the range of minutes. The communication between the Provisioning Service and Sandbox Management Service is asynchronous and event-based, built on Kafka's messaging framework.
Challenges
There were five challenges we had to face and address. Below are details of our challenges, our approach, and how we solved them.
Single collection for multi-item transactions and Stored Procedures
Several workflows within the Sandbox Management Service require transactional processing, since more often than not multiple models need to be created or modified within the same workflow. Microsoft's Cosmos DB documentation covers the underlying transaction model.
Here is a simple resources provisioning workflow:
1. Update the provisioning status when the provisioning success message is received.
2. Update the sandbox status to Active.
3. Trigger requests to replenish the pool.
The above workflow provides a guarantee that every time a sandbox is used up from the sandbox pool, it is replenished immediately. This workflow requires multiple records to be updated in a single transaction.
The underlying database used for storing the metadata is Azure CosmosDB, which is a NoSQL database. Leveraging the schema-less nature of the database, we use a single collection for storing all the models. Within the same collection, the Record Type field identifies the type to which a particular record belongs. All queries run on CosmosDB have a filter on this field for operating on certain models.
SELECT * FROM c WHERE c.recordType = "record_type_a"
The collection is logically partitioned with a predetermined partition key. CosmosDB supports transactions within a logical partition of a collection using Stored Procedures. We partition the collection based on the partition key such that all of our operations that require transactions are confined within the same logical partition. During the course of execution of the stored procedure, if there is an exception, the entire transaction is aborted and rolled back. Depending on the type of exception Stored Procedure can be retried. This gives us a very powerful programming model for managing transactions in PALM workflows.
JavaScript-based stored procedures are installed on application start-up. While installing a stored procedure, its current version is checked; if it is unchanged, the existing stored procedure is preserved. If the stored procedure was updated and its current version is higher than that of the installed stored procedure, the installed stored procedure is updated to the latest version.
Here is the above workflow using Stored Procedures:
List<UpdateOp> updateOpsList = new ArrayList<>();
UpdateOp updateOp1 = new UpsertOp(updatedProvisioningStatusRecord);
UpdateOp updateOp2 = new UpsertOp(updatedSandboxStatusRecord);
UpdateOp updateOp3 = new UpsertOp(newReplenishRequestRecord);
updateOpsList.add(updateOp1);
updateOpsList.add(updateOp2);
updateOpsList.add(updateOp3);
try {
    triggerStoredProcedure(updateOpsList);
} catch (Exception e) {
    // retry
}
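The `// retry` branch can be fleshed out as a bounded retry loop. This Python sketch is illustrative only; the trigger function is a stand-in for the real stored-procedure call:

```python
import time

def trigger_with_retry(trigger, update_ops, max_attempts=3, backoff_s=0.0):
    """Call the stored-procedure trigger, retrying on failure.

    If every attempt fails, the last exception is re-raised so the
    caller (e.g. a recovery job) can deal with it.
    """
    last_exc = None
    for attempt in range(1, max_attempts + 1):
        try:
            return trigger(update_ops)
        except Exception as exc:          # transaction aborted and rolled back
            last_exc = exc
            time.sleep(backoff_s * attempt)
    raise last_exc

# A fake trigger that fails twice before succeeding:
calls = {"n": 0}
def flaky_trigger(ops):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient CosmosDB error")
    return "committed"

print(trigger_with_retry(flaky_trigger, ["op1", "op2", "op3"]))  # committed
```

Because each stored-procedure run is all-or-nothing, retrying the whole batch is safe: a failed attempt leaves no partial updates behind.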
Sandbox Pooling
Each sandbox has all the platform resources provisioned. After the resources are provisioned, permissions need to be applied to the created resources. These two operations need to happen sequentially, and the process involves communication and handshakes between the platform services throughout. The entire end-to-end workflow takes anywhere between 12 and 20 minutes to complete. This is not the best experience for users who want to try out certain features of the platform or set up custom jobs, and our SLTs are much tighter than that.
SLT targets for the Sandbox Management Service
Use cases we had to solve:
As an Adobe Experience Platform user, I want to be able to use my newly created sandbox within a maximum of 30 seconds of submitting the CREATE request.
As an Adobe Experience Platform user, I don’t want to see my deleted sandbox anymore after a maximum of 30 seconds after pressing the DELETE button.
As an Adobe Experience Platform user, I want to be able to use my reset sandbox within a maximum of 30 seconds of submitting the RESET request.
The three conditions above should hold for both UI and API calls. To meet the SLTs, the Sandbox Management Service maintains a warm pool of pre-provisioned sandboxes. The provisioned resources do not contain any data/assets until the user actually starts using the sandbox, so they do not cost anything until then. Sandbox pooling gets triggered in 3 scenarios:
When the customer is first provisioned for Adobe Experience Platform
When the maximum number of sandboxes allowed is increased for the customer
When a user creates/resets a sandbox, to replenish the pool
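The take-one, replenish-one behaviour of the pool can be sketched as a toy model. This is an illustration of the idea, not Adobe's service; in the real system replenishment runs asynchronously, so the user is never blocked on provisioning:

```python
from collections import deque

class SandboxPool:
    """Warm pool of pre-provisioned sandboxes, replenished on every take."""

    def __init__(self, provision, size):
        self._provision = provision  # slow provisioning function
        self._pool = deque(provision() for _ in range(size))

    def take(self):
        """Hand out a warm sandbox and immediately trigger replenishment."""
        sandbox = self._pool.popleft()
        self._pool.append(self._provision())  # replenish the pool
        return sandbox

counter = iter(range(100))
pool = SandboxPool(lambda: f"sandbox-{next(counter)}", size=3)
print(pool.take())      # sandbox-0 (warm, returned instantly)
print(len(pool._pool))  # 3 (pool topped back up to full size)
```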
Note: With pooling, we achieved a reduction in user sandbox creation time from 12–20 minutes to 30 seconds, i.e., the wait time for sandbox creation dropped by up to 97.5%.
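The quoted figure follows from simple arithmetic on the worst case:

```python
old_worst_s = 20 * 60  # 20 minutes, in seconds
new_s = 30             # pooled creation time
reduction = 1 - new_s / old_worst_s
print(f"{reduction:.1%}")  # 97.5%
```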
The size of the sandbox pool depends on the maximum number of sandboxes the user is allowed to create. Once users have used up all the sandboxes in the pool, they are throttled either by the exhausted pool itself or by the rate-limiting checks in the sandbox management service. Having multiple redundant checks helps keep the system in check and prevents the provisioning mechanism from getting overwhelmed.
Added advantage of Sandbox Pooling
Pooling also guards against provisioning mechanism failures. The pooling service always maintains a warm pool of a designated number of sandboxes and always looks at pending user requests when a sandbox becomes available in the pool. User requests to create sandboxes never fail; at worst, if the pool is exhausted, a request stays in the creating state for a few minutes until the pool is replenished. When provisioning completes for the pool replenishment, the pooling mechanism checks the database for any pending user requests and serves them. And as soon as a sandbox is used up from the pool, the pool replenishment process is triggered. Thus, sandbox pooling has helped bring sandbox provisioning failures down to almost zero percent.
Provisioning failures might still occur and affect the pool replenishment mechanism. This is taken care of by the recovery process built into the Sandbox Request Manager Service, which periodically looks at failed provisioning requests for pool replenishment and retries the provisioning at scheduled intervals. More details are described in the section dedicated to the Sandbox Request Manager Service.
Figure 5: Example performance numbers after pooling implementation.
Monitoring and Recovery Infrastructure
Sandbox Management Service is built to be robust and our PALM infrastructure makes it more reliable with a specialized monitoring and recovery mechanism described below.
Watchdog Service
The nature of the sandbox life cycle involves dependencies on multiple services and several steps. At every step of the life cycle, there are chances of missteps that can lead to failures and a bad customer experience. To mitigate those caveats, we have our own infrastructure service that maintains the sanity of the sandboxes and fixes failures. The Watchdog service is an executor service where a job can be scheduled and a duration can be set for its recurrence.
The sandbox metadata is stored in Cosmos DB and it’s the source of truth for the sandbox state. The sandbox management service is backed by the Cosmos DB, all the transactions and sandbox states are maintained in the database. The watchdog service polls data from the database and identifies the monitoring scenarios. Based on the outcome several metrics are calculated and sent to observability where alerts are triggered. Metrics dashboards are set up on Grafana where a holistic view of the service is observed. The dashboard showcases several charts and bar graphs which helps the service owner understand the operations and performance of the service. Alerts are set up on an internal alerting service that uses Prometheus alert rules to maintain all the alerts.
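One such watchdog pass can be sketched as: poll the request records, flag those stuck in a pending state past a threshold, and emit a metric. The record shape below is illustrative, not the actual schema:

```python
from datetime import datetime, timedelta

def find_stuck_requests(requests, now, threshold=timedelta(minutes=30)):
    """Return requests that have sat in 'pending' longer than the threshold."""
    return [
        r for r in requests
        if r["state"] == "pending" and now - r["updated_at"] > threshold
    ]

now = datetime(2021, 1, 1, 12, 0)
requests = [
    {"id": "a", "state": "pending", "updated_at": now - timedelta(hours=2)},
    {"id": "b", "state": "pending", "updated_at": now - timedelta(minutes=5)},
    {"id": "c", "state": "done",    "updated_at": now - timedelta(hours=3)},
]
stuck = find_stuck_requests(requests, now)
print([r["id"] for r in stuck])  # ['a']  -> emit metric / raise alert
```

In the real service, the count of stuck requests would be pushed to observability, where Prometheus-style alert rules fire on it.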
Figure 6: Sandbox Request Manager Service architecture
Scenarios:
Sandboxes stuck in an intermittent state: The workflow of a sandbox state change requires sandbox job requests with tasks; these tasks are actions that other services must perform in order to promote the sandbox to a terminal state. The tasks also hold their own states, which help decide the sandbox request state. The service monitors sandbox request tasks that are stuck in a pending state and sends those metrics to observability. Based on those metrics, alerts are raised informing the team to take action.
Triggering sandbox pooling: The service is also responsible for topping up the pool of sandboxes in case of failures. Since the pooling mechanism depends on provisioning, failures can leave sandbox creation/reset/deletion in a failed state. The Watchdog service triggers pool replenishment for orgs where the pool is not completely filled, serving as a bridge between a failure and getting the pool filled up.
Total number of sandboxes: The Watchdog service has another job set up that queries the total number of sandboxes for each IMSOrg and its corresponding state and sends it to observability. This metric helps understand the total sandboxes per IMSOrg sliced by sandbox state.
What’s Next
There are currently two sandbox types, each having its own data and operation isolations:
Development: The development sandboxes, as the name suggests, are used for development-oriented work within an ecosystem.
Production: The production sandboxes are used for production environments.
The service will start supporting multiple production sandboxes, allowing the user to create more than one production sandbox to cater to the need for data isolation at the production environment level. For example, if a company wants to separate data based on regions, it can have a separate production sandbox for each region. Similar to the development sandboxes, there is also a warm pool of production sandboxes that maintains high availability and low latency for production sandbox operations (create/delete/reset).
Figure 7: Future workflow to create production sandbox in Adobe Experience Platform
Figure 8: Future workflow to reset production sandbox in Adobe Experience Platform
Figure 9: Future workflow to delete production sandbox in Adobe Experience Platform
Follow the Adobe Tech Blog for more developer stories and resources, and check out Adobe Developers on Twitter for the latest news and developer products. Sign up here for future Adobe Experience Platform Meetup.
References | https://medium.com/adobetech/introducing-adobe-experience-platform-sandboxes-9eb000794d6f | ['Jaemi Bremner'] | 2021-04-10 01:56:14.351000+00:00 | ['Developer', 'Open Source', 'Platform', 'Enterprise Technology', 'Adobe Experience Platform'] |
1,488 | My Favorite Pro Creative Apps for 2021 | Bear: The one and only home for all my ideas. The tagging system sold me on this. It’s not the only app with this feature, but it does do it rather well. Instead of categories, Bear allows you to add multiple sorting tags to each note. With categories, one note has one category. With tags, one note can have multiple “categories.”
As a result, the organization is more complete. Which matters to me because connections between thoughts are important. For instance, let’s say I create a note about a new strategy for my trading algorithm. That would obviously belong in the “Code” category. But what if I also want to add it to my post queue, the “Posts” category? I can’t categorize it as both “Code” and “Posts.” But with Bear, I can.
I simply add both “#code” and “#posts” to that one note. Done. Now I can find it under both “#code” and “#posts” in the sidebar. In other apps, I’d have to duplicate the information. Maybe I’d add “post about new trading strategy” to a different note under the Posts category. Obviously, Bear’s tag system is superior.
Bear’s note editor in fullscreen mode. Source: author.
Moving on, the Bear editor is one of the best I’ve used for on-the-fly ideas. Fast. Clean. Effective. Plus, Markdown support is excellent — I can format quickly even on my phone. This was a problem with my ex-idea jotter, Agenda. Editing was too slow. In contrast, there’s less friction with Bear. A big deal when I’ve got three things to write down at the same time.
Now, I’m punctilious when it comes to UI. I probably apply more selection criteria to user interfaces than I’ll apply to my wife. Corner radius of a single button looks off? Junk it. There’s no compromise when it comes to design. I look at these tools for hours a day. They must be pretty. And I’m happy to report Bear passes all the tests. In fact, I might even call it impressive. Dark. Clear. Intuitive. I can get behind the aesthetic.
The Dieci theme suits my fancy on my iPhone and iPad. On Mac, Solarized Dark suits the bigger screen. You’ve got plenty of skin options if you go pro — more on that later.
Extras
But things get fun when you leave the app, too. The developers thought of everything. It’s pretty much as integrated as a 1st party app. For instance, I set up a Siri Shortcut to dictate a new Bear note using Siri. When I suddenly come up with a bug fix while cooking, this is a lifesaver. In addition, I have a shortcut to record a post idea from my clipboard — useful when something I’m reading inspires me. Just copy some text, run the shortcut, and the idea is stored in Bear under “#posts”. The whole process takes three seconds. Not all note apps can do this.
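Such shortcuts ultimately drive Bear through its x-callback-url scheme. A capture URL can be assembled like this (the `/create` action and its `title`/`text`/`tags` parameters follow Bear's published scheme, but verify against the current documentation):

```python
from urllib.parse import urlencode

def bear_create_url(title, text, tags=""):
    """Build a bear://x-callback-url/create URL for quick note capture."""
    params = {"title": title, "text": text}
    if tags:
        params["tags"] = tags  # comma-separated, e.g. "posts,code"
    return "bear://x-callback-url/create?" + urlencode(params)

url = bear_create_url("New trading strategy", "Try mean reversion", "posts,code")
print(url)
```

Opening the resulting URL on iOS or macOS hands the note straight to Bear, which is exactly what a Shortcut does under the hood.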
And the Bear widgets on my iPad Pro’s Today view? Remarkably useful. I set it up to show my most recent notes. So with one tap, I can pick up where I left off. Right from the home screen. No digging through categories. No filtering. This is what notes should be: Natural. Accessible. An extension of your own mind. Bear gets as close to that as I’ve ever experienced.
I mean, you can even access your local Bear Notes database on macOS. For the non-programmers out there, this means you have infinite automation options for manipulating your notes. Granted, most of you won’t need this. But hey, it’s an option.
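For instance, a read-only peek at note titles might look like the sketch below. The Core Data table and column names (`ZSFNOTE`, `ZTITLE`) are assumptions that vary by Bear version, so inspect your own copy of the database first, and always work on a copy rather than the live file:

```python
import sqlite3

def list_note_titles(conn):
    """Return note titles from a Bear-style SQLite database connection."""
    rows = conn.execute("SELECT ZTITLE FROM ZSFNOTE").fetchall()
    return [title for (title,) in rows]

# Demo against a throwaway in-memory database with a Bear-like schema;
# point sqlite3.connect(...) at a *copy* of the real file instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ZSFNOTE (ZTITLE TEXT, ZTEXT TEXT)")
conn.execute("INSERT INTO ZSFNOTE VALUES ('Trading strategy', 'Try mean reversion')")
print(list_note_titles(conn))  # ['Trading strategy']
```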
Pricing
Finally, let’s talk cost. Bear Pro will cost you $1.49 per month. If you don’t have commitment issues, unlike me, you could pay roughly $1.25 per month, billed annually. Whichever option you prefer, just buy it. This pricing is absurd. I’m not sure how the good folks who make this are turning a profit. A dollar and a half per month? You can find that walking down the street.
The free version doesn’t have sync. And themes are rather limited. If you’re a light user, I guess you could make it work. But it’s a deal-breaker for me. I use this every day. All the time. For hours. To do everything. They could charge ten times their current price, and it’d still be a bargain. It helps that I’m sold on the brand, too. I mean, what developer offers free themed wallpapers? Well-played, Bear. You got me there.
In summary, I can’t recommend Bear enough. It’s simple if that’s all you need. But when the rubber meets the road, it keeps up. Value for money, capable, and pretty. What else do you need? Migrating from your existing notes app is trivial, too. So there’s not much stopping you.
Alternative: Drafts
One alternative I’ve heard about is Drafts: Similar editing, similar organization, just five times uglier. Can’t stand it. Not enough negative space in the layout of the editor. The menus look messy. Too distracting. Overall, not as consistent as Bear. I need serenity in my idea sanctuary. But if you’re less pedantic, why not give it a shot. | https://medium.com/swlh/best-pro-productivity-creative-apps-crypto-trader-blogger-2021-1aafbdefa919 | ['Mika Y.'] | 2020-12-22 08:21:01.713000+00:00 | ['Business', 'Work', 'Technology', 'Productivity', 'Startup'] |
1,489 | What’s in 2021: The Technology Landscape Of The Future | It didn’t take long for the optimism of a new decade to wear off. Since the beginning of 2020, companies everywhere were reeling as they reckoned with the effects of the COVID-19 pandemic. Sadly, the impact of the virus was too much for many firms, leaving millions of workers unemployed and driving thousands of businesses to close their doors. Those companies that stayed afloat had to act quickly in order to enable their remote workforce and maintain operations. Heading into 2021, there is little precedent for projecting the future. The economy is showing some signs of stability, but there are lingering fears over continued challenges or further surprises. The technology landscape is certainly going to change and we will dig deeper to see how much the world has affected it.
Through all the confusion, though, there are still some basic concepts that will shape the year to come. Digital operations are more important than ever, with many transformative changes accelerating over the past year. The influence of technology is massive, forcing new approaches to regulatory behavior.
As the industry emerges from a chaotic year, it will begin a rebuilding phase, but this rebuilding goes beyond restoration. There is little opportunity to return to the old way of doing things. Thanks to changes that no one would have wished for and fueled by the requirements of a digital society, the technology industry will doubtless take a new shape in the coming year. Here are some trends that C-suite shall pay attention to strengthen their businesses.
Technology landscape is going to follow a consistent development
In 2020, the global information technology industry took a small step back in terms of overall revenue. As of August 2020, the research consultancy IDC was projecting global revenue of $4.8 trillion for the year. While the tech sector fared better than many other industries during the pandemic, it was not immune to cutbacks in spending patterns and deferment of major investments.
Moving forward, IDC projects that the technology industry is on pace to reach $5 trillion in 2021. If this number holds, it would represent 4.2% growth, signaling a return to the trend line that the industry was on prior to the pandemic. Looking even further into the future, IDC expects the pattern to continue, estimating a 5% compound annual growth rate (CAGR) for the industry through 2024.
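A 5% CAGR compounds multiplicatively, so the implied trajectory from a $5 trillion 2021 base can be checked directly:

```python
base_2021 = 5.0  # trillions of dollars (IDC's 2021 projection)
cagr = 0.05      # 5% compound annual growth rate

for year in range(2021, 2025):
    value = base_2021 * (1 + cagr) ** (year - 2021)
    print(year, round(value, 2))
# 2021 5.0 | 2022 5.25 | 2023 5.51 | 2024 5.79
```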
The US is the largest tech market in the world, representing 33% of the total, or approximately $1.6 trillion for 2021. However, despite the size of the U.S. market, the majority of technology spending (67%) occurs beyond its borders. Spending is often correlated with factors such as population, GDP and market maturity.
The bulk of technology spending stems from purchases made by corporate or government entities. A smaller portion comes from household spending, including home-based businesses. With the blurring of work and personal life, especially in the small business space, it can be difficult to precisely classify certain types of technology purchases as being solely business or solely consumer.
There are a number of taxonomies for depicting the information technology space. Using the conventional approach, the industry market can be categorized into 5 top-level buckets, as shown in the following diagram.
>> Read more: IoT Challenges — What are the greatest and How to deal with? <<
The allocation of spending on each category will vary from country to country based on a number of factors. In the mature US market, for example, there is robust infrastructure and platforms, a large installed base of users equipped with connected devices, and available bandwidth for these devices to communicate. This paves the way for investments in the software and services that sit on top of this foundation.
Tech services and software account for nearly half of spending in the U.S. technology market, significantly higher than the rate in many other global regions. Countries that are not quite as far along in these areas (such as Vietnam) tend to allocate more spending to traditional hardware and telecom services.
Building out infrastructure and developing a broad-based digital workforce does not happen overnight. Scenarios do exist, however, whereby those without legacy infrastructure — and the friction that often comes with transitioning from old to new — may find an easier path to jump directly to the latest generation of technologies.
On the upside, technology firms are planning to capitalize on the ongoing digitalization of business, whether that is expanding engagements with their current customer base or reaching into new segments. Additionally, technology firms are applying lessons learned from a challenging year and placing the spotlight on their internal operations, including sales and marketing efforts.
The enormity of the industry is a function of many of the trends discussed in this report. Economies, jobs, and personal lives are becoming more digital, more connected and more automated — a trend that is only accelerating after recent events. The platform for computing has become much more stable, with access to technology no longer limited by location or constrained to certain activities. As a result, more energy is pouring into creative solutions, further expanding the opportunities for both IT professionals and IT channel firms.
The Business Of Technology Positioned For Greater Influence
Companies in the business of technology (the channel) are also facing a host of opportunities, challenges and changes in the year ahead.
At a general level, many things haven’t changed for the channel. Technology and the business of selling it continues to grow more complex. What was once a fairly stable set of infrastructure products in a channel provider’s portfolio has, in the cloud age, morphed into myriad choices around software-as-a-service applications, data tools and a stack of emerging technologies.
Looking ahead to 2021, firms that manage to thrive will be making investments in employee skills training, expanding their market reach to new customers and verticals, partnering with potential competitors, and embracing emerging tech.
To ensure positive growth in 2021, channel firms are first looking to rely on what they already have. According to recent research, the #1 factor that could move the needle positively next year is a pick-up in business from existing clients. This makes sense in the current economic climate, where finding new customers may be challenging, considering that many of the SMB customers the channel targets have been in cost-cutting or frozen-budget mode during the pandemic.
That said, mining existing customers for additional revenue rings especially true with managed services providers for whom upselling additional types or tiers of services is often the key to growing revenue and profit margin.
Other areas where firms place value include reaching new customer segments. Opportunities there range from enticing new clients with any of the emerging technology solutions on the market to offering up a specialization in a particular industry vertical.
The effect COVID-19 has had on business in the channel is undeniable, with roughly half of companies reporting some downside impact in the past year. Smaller firms are more likely than larger to have been affected, given that their customer base also tends to be in the SMB space, a demographic most negatively hurt by business lockdowns and restrictions on capacity.
However, a study by McKinsey also shows that the pandemic has accelerated digitalization efforts by many companies, especially as they relate to interactions with customers. This phenomenon, along with the aforementioned need to support the shift to remote work, provides some clear avenues of opportunity for channel firms looking to assist customers in these efforts.
>> Read more: How to save your business from the Coronavirus <<
Budget Allocations Takes Technology As Central
One of the key questions to think about heading into the technology landscape of 2021 is how to budget. More than last year? Less? The same? Given the volatility of 2020, annual budgeting is a top of mind issue for many channel firms as they attempt to forecast sales, ponder new markets, or fight to keep their business afloat.
Interestingly, 31% of channel firms feel that the budget for tech support within their organization is too low.
This is somewhat puzzling given that these are technology firms whose presumed mission is to provide support, technical advice and consulting to customers. One would assume this would be a highly funded area. It’s possible that these responses reflect the impact of job cuts due to the pandemic, or, more positively, are indicative of a rise in customer demand that has pinched tech support teams’ capacity, leading to a call for more headcount or other resource help.
Along with budgeting comes the human factor: staffing, skills gaps, etc. Despite current pandemic conditions, many channel firms are nonetheless on the hunt for new employees, particularly those with skills in areas such as emerging tech (IoT, AI, etc.): nearly 4 in 10 firms report having current openings in technology roles that they are actively looking to fill.
That said, the technology focus, and type of customers served will greatly influence the staffing needs of the typical channel firm. For example, a managed services provider may well need more tech staffing knowing that a majority of its customers are working from home instead of a central office location. Other companies are grappling to fill staffing holes because some of their employees needed to leave the workforce to tend to school-age children at home during the pandemic. This may lead to a surge in demand for outsourcing and solution provider services.
>> Read more: Vietnam — IT outsourcing heaven for tech dominants <<
Employee technological skill focuses go at a deeper level
Human factors, as mentioned, are a core concern that will have a big impact on the technology landscape in 2021. With that, the IT workforce will keep evolving from a heavy focus on infrastructure and generalists into a diverse world of specialists spread across 4 pillars supporting IT operations: infrastructure, software development, cybersecurity, and data. The details are carefully researched and reported by CompTIA:
1. Infrastructure
Most companies look to be getting back to the infrastructure basics in the year ahead. This has been true in the past as well, but this year the demands around fundamental pieces are even more pronounced given the focus on resilient systems. As companies place more of their IT architecture in the cloud and consider new options for their workforce, networking performance becomes more critical.
Other backend components such as server administration and storage are also part of a broader modernization of the IT building blocks. Nearly half of all companies are placing focus on first-line support, proving that the help desk has still not become a commodity in an age of outsourcing and end-user tech savvy. Although the emerging area of IoT was not a major priority last year, it still takes a small step back as companies concentrate more on core operations and less on advanced techniques.
2. Software Development
Whether dealing with internal stakeholders or external customers, companies continue to emphasize user experience. The app approach that became widespread with the explosion of mobile devices redefined expectations around software usability, and many companies are still climbing the learning curve.
The need for quality assurance is tied to the speed of development cycles, as organizations are trying to accelerate their processes without disrupting workflow. This acceleration leads to an overall focus on DevOps, which sees a significant increase in attention over last year. Although infrastructure projects may command more time and energy in 2021, the general direction will be toward more investment in software development for customization and automation.
As companies build out their software capabilities, they will drive more interaction between software development and infrastructure operations. The lack of focus on AI and mobile development is less due to a shift away from long-term projects and more due to the fact that many companies do not need these specializations as core competencies.
3. Cybersecurity
Cybersecurity is possibly the most complex of the 4 pillars, covering expanded defenses that companies must build, innovative approaches to proactively test those defenses, and internal processes that create secure operations. It is somewhat surprising to see such a high focus placed on compliance for the coming year.
While the technology industry is heading toward more regulation, many companies have been somewhat slow to fully embrace compliance processes. Workforce education also moved up in the list of priorities.
New concerns that stem from a remote workforce have been a primary trigger for both security awareness education and security investments. Risk analysis, cybersecurity analytics, and penetration testing are all areas that need improvement as companies adopt a zero trust mindset. Cybersecurity metrics rank lowest for the coming year, which signals the ongoing challenge in bridging the gap between cybersecurity best practices and business health.
4. Data
The field of data is not set up to be a dedicated function as often as cybersecurity, but it is still a field where businesses are trying to establish comprehensive policies and management. Database administration is still the top focus area heading into 2021, as many companies continue to move away from spreadsheets and other simplistic forms of data management. The emphasis on data management and policies shows that businesses are beginning to take a more comprehensive approach to their data, which in turn will drive more specialization.
While data visualization and predictive analytics have relatively strong demand, those areas are still difficult to tackle without a holistic data management strategy. As far as cutting-edge technologies go, distributed ledgers such as blockchain have tremendous potential, but there are still hurdles in implementation, and the technology will most likely remain a degree separated from most end users. From the start, the technology landscape can be seen as in flux, since many factors are going to change or even need to be discarded completely so that new ideas can come to life.
Visit our Page
In case you are looking for a partner with the critical skills to leverage your 2021 tech initiatives, feel free to contact us: a tech consultant that has been in the industry for 11 years, recognized for a creative mindset, strong commitment, and outstanding skills. We promise not only to deliver the best social app ideas to accelerate your business but also to translate those initiatives into a seamless and competitive final product.
Contact us via: | https://medium.com/@savvycom/whats-in-2021-the-technology-landscape-of-the-future-88ab45126186 | ['Savvycom Jsc'] | 2020-12-10 04:08:50.563000+00:00 | ['Technews', 'Technology', 'Savvycom'] |
1,490 | Here is Our Review of Decentralised Exchanges Built on 0x | You have to select the token pairs, specify the amount you want to buy/sell to create your order. You can specify a time limit to your order after which the order expires. It provides with candlesticks, order book, etc like most exchanges.
ERC dEX has a strict verification process to list tokens; this measure will protect traders to a certain extent. They also have taken certain measures and initiatives to promote high liquidity. However, our order on ERC dEX took a few days to fill.
They have built a trading toolkit called Aqueduct; it provides APIs that developers can use to automate trading. They also have a flexible program that incentivizes partners who can promise them liquidity.
They also have a market-maker program which traders with no coding experience can use to automate trading and backtest strategies. When compared to other decentralized exchanges, ERC dEX has put in more efforts to ensure liquidity.
ERC dEX is built by a strong team with great industry experience. The CEO, David Aktary, has worked with companies like IBM and JP Morgan. The CTO, Luke Autry, was part of ShareFile, which was later acquired by Citrix. We noticed one more thing: all the C-level employees in the team are coders and have maintained decent GitHub profiles.
How do you plan to stand out with respect to other relayers in the market?
We are making the global shared liquidity concept a reality with our Aqueduct network. This network incentivizes everyone in the ecosystem to share liquidity as far and wide as possible. No other relayer has this. We also will be a compliant trading venue for tokenized securities; I don’t think any other relayers are doing this either.
What are the challenges you face as one of the pioneers in the decentralized industry?
Anyone that doesn’t answer this question with “liquidity” is wrong. Liquidity is the biggest problem with all decentralized exchanges. We’re tackling it and making good progress, but it’ll take time.
Tell us about your team and what makes you special?
We’re unique in the 0x relayer space in that our team not only comes from a software engineering background, but we also have deep finance and business experience and connections. This helps us in many ways.
What are your challenges in acquiring, educating and retaining customers?
We have a couple of challenges here. Liquidity begets liquidity — users will go wherever that liquidity is, so we have a bit of a starting problem. As I said before, we’re solving that, but it takes time. We’re US-based, dealing in a business where the regulations may be a bit unclear. We’re doing everything we can to be as compliant as possible, but doing so may mean sacrificing taking an action that would otherwise bring us a large infusion of users. This is a sacrifice we’re willing to make to realize our long-term goals.
Do you plan to bring additional products to the market?
Absolutely. We currently have 3 products in the market: our ERC dEX relayer for token spot trading, our Aqueduct liquidity network, and our Market Maker Automation Toolkit. We know we’re going to be adding compliant security token spot trading, but we may also add products using other protocols. Some we’ve been seriously considering include dY/dX, b0x, and others.
How do you plan to offer liquidity?
We have over 80 members in our liquidity network, including large market makers like Hehmeyer Trading and other relayers like Bamboo Relay, Shark Relay, and Amadeus Relay. We are the only relayer that collects fees and uses those fees to incentivize liquidity providers. We do this through our Market Maker programs. We have two levels to this program, our institutional program and our Designated Market Maker program, which is intended to democratize the market making process. I recently published a blog on this and the benefits of signing up, with as little as $100 committed. We’ve had nearly 60 people sign up in the few days after that blog was published.
Have you guys closed any rounds of funding?
We have mostly bootstrapped to date, but are now about halfway through a seed round. Interested investors should reach out to finance@ercdex.com.
Paradex
Paradex offers a decent trading experience. You can begin by connecting your wallet. You need to select the token pair by clicking on the market button on the top bar. They have a limited number of tokens to trade. | https://medium.com/onchainlabs/here-is-our-review-on-decentralised-exchanges-built-on-0x-5afc9b107aec | ['Febin John James'] | 2018-06-27 04:04:26.195000+00:00 | ['Bitcoin', 'Cryptocurrency', 'Innovation', 'Technology', 'Ethereum'] |
1,491 | Want to become a Data Scientist ? | So, read the interesting journeys of three successful data scientists to gain inspiration and lessons to excel in data science industry. ✌️
By: Fatemeh Renani, Mohammad Mazraeh, Jaskaran Kaur Cheema
Infographic : Jaskaran Kaur Cheema
“Torture the data, it will confess to anything”-Ronald Coase
Due to the enormous generation of data, the modern business marketplace is becoming a data-driven environment. Decisions are made on the basis of facts, trends, and analysis drawn from the data. Moreover, automation and Machine Learning are becoming core components of IT strategies. Therefore, the role of Data Scientists and Data Engineers is becoming increasingly important.
In this blog, we have enumerated the journeys of three Data Scientists who have different educational backgrounds and career paths but have successfully curved a niche for themselves in the Data Science Industry.
We hope that their journeys will inspire you to excel in data science industry.
MANROOP KAUR, Data Engineer ICBC
Manroop Kaur is a Data Engineer at ICBC Vancouver. She is a graduate of SFU's Professional Master of Science in Computer Science program, specializing in Big Data.
Can you tell us about ICBC and your current role?
ICBC was built in order to provide basic insurance and manage claims, which is the core component of the company. At present, the company is working on the Rate Affordability Action Plan (RAAP) project that will fundamentally change its business model to create a sustainable auto insurance system which would provide more affordable and fair rates for all. As a part of this project, I am working as a Data Engineer in the Claims and Driver Licensing teams in the Information Management Department.
What convinced you to venture into the Big Data field?
While working with Tech Mahindra, I heard about a project where data was being transferred from a traditional database to Hadoop. This was the first time in my life I came across big data terminology, and I started exploring it by reading online articles. Since I already wanted to expand my educational qualifications, I thought of venturing into this field. SFU's Professional Master's program was a perfect fit, so I applied and got accepted into it.
Can you describe your career journey after enrolling in the Big Data program?
While at SFU, I did my co-op with WorkSafeBC. My work focused on text analysis, doing advanced analytics and applying machine learning algorithms. After that, I applied at ICBC, and it's been a year of working there as a Data Engineer.
Any courses that you recommend pursuing to be successful in this program?
I believe that the Big Data program at SFU is structured so well that if you complete the assignments of Programming Lab 1 and 2 diligently, there is no requirement for any other course.
Can you describe one of your most interesting projects?
I remember doing a project during my internship on detecting the likelihood of a claim being fraudulent. We analyzed the claim data of the past 5 years. Regular meetings with real field investigators were held to learn about the red flags. Later, the data was analyzed using those red flags. This project taught me that in an academic setting we focus on obtaining high accuracy, but sometimes in real-life problems accuracy has a different definition. So, the model that the data science team was preparing would be termed successful if it was able to detect even 40 out of the 500 claims that actually turn out to be fraudulent in real life.
Any interesting lessons that you learned after working in this field?
So, when I started learning about data science, I used to get very excited about applying ML algorithms to see the output of my model without spending much time on analyzing or cleaning the data. Later, I realized that data plays a vital role and preparing it takes 90% of the time, but as the performance of a model depends upon the data being fed to it, the preparation time is worth the effort.
How do you reflect on your decision of enrolling in this program?
I think the decision of acquiring a Master's degree in Big Data at SFU has proved to be worth the time and resources I invested in it, as it not only provided me an education in line with industry requirements but also helped me in securing a good job.
Any advice for people who want to venture into this field?
I think focusing on one domain rather than doing everything in data science and updating your skills regularly will lead to a successful career. | https://medium.com/sfu-cspmp/want-to-become-a-data-scientist-ed309bdcc738 | ['Jaskaran Kaur Cheema'] | 2019-03-15 23:37:58.851000+00:00 | ['Life Lessons', 'Interview', 'Data Science', 'Education', 'Technology'] |
1,492 | Why is Zipcar not a thing of the past ? | Written by Gwanygha’a Gana and Kummar Gaurav Singh— July 25, 2020
With the rise in popularity of car sharing apps like Uber and Lyft, the utility of food and grocery delivered at your doorstep during the ongoing pandemic and the never-ending buzz around autonomous vehicles, it begs the question; why is Zipcar still around and what does the future hold for it?
Zipcar offers on-demand car sharing. As a member, you can book a car by the hour or day to go where you want when you want. Zipcar covers the cost of gas, secondary insurance, parking and maintenance. The fleet comes in all sizes and includes compact sedans, SUVs, spacious vans and much more. There are three kinds of membership for Zipcar; Monthly, Annual and Weekday.
Monthly membership is a $7 per month option which gets you driving rates from $8.50/hr and $76.75/day.
Annual membership is a $70 per year option which gets you driving rates from $8.50/hr and $76.75/day.
Weekday membership starts at $249 per month. You get a car to use during the week. (isn’t that cool!)
From it’s $1B valuation in 2011 to it’s ~$500M sale to Avis in 2013, the diverse car sharing options provided by Zipcar have allowed it to be a transportation option for users around the world. In order to complement its car rental business, Avis ponied up $500M to acquire Zipcar which is now part of the Avis Budget group.
The rise of ride hailing apps like Uber and Lyft have transformed the car-sharing industry for good. Today most riders use these ride hailing apps to move around in a quick and efficient manner without having to worry about returning a car to its parking spot. While this has affected Zipcar’s business, there are situations where the Zipcar model still makes sense.
Scenario 1 — Airport ride
Let’s take an example of a user located in Seattle who needs to run errands for 90 mins e.g. pick up a friend from the airport and pick up dinner on the way back.
For these kinds of trips, where the user does a round trip and needs to return to their base location within a few hours, renting a Zipcar makes economic sense over an Uber or Lyft. However, if there is a prolonged stop which stretches the time to 3–4 hrs, the economics start to change. One can argue that given unpredictable post-landing times (like baggage retrieval) or delayed arrival/departure times at major airports such as Chicago, it's not always easy to optimize the 90-minute Zipcar booking window. However, in such scenarios, the added flexibility of adding time to the current trip on a prorated basis makes Zipcar still worth trying.
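The scenario-1 arithmetic can be sketched with the hourly rate quoted above (the 90-minute duration and the one-hour delay are this sketch's assumptions, not figures from a Zipcar invoice):

```javascript
const hourlyRate = 8.5; // USD/hr, from the membership rates above
const tripHours = 1.5;  // the 90-minute errand run

// Gas, insurance, and parking are covered by the membership, so the
// hourly rate is close to the all-in cost of the trip.
const zipcarTripCost = hourlyRate * tripHours; // 12.75

// Extending the booking is prorated, so a one-hour delay adds one hour's rate.
const withDelay = hourlyRate * (tripHours + 1); // 21.25
```

Even with the delay, the round trip stays well under what a pair of one-way airport rideshares would typically cost.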
Scenario 2 — Airport ride with stop
In this case (airport ride with stop), the costs of rideshare and Zipcar are much closer, to the point where an extra hour with Zipcar may cause the user to go with the rideshare option for more flexibility.
There are other scenarios which still exist where the services Zipcar’s do offer, make it a very suitable and flexible option such as moving using Zipcar vans and Road trips.
Scenario 3 — Daily commuters
Let’s analyse a user who uses rideshare to and from work every week day (assuming a 10 min, 3 mile, shared rideshare or regular rideshare) , which in Seattle could cost approximately $7-$12 depending on the time and other factors. (Excluding time of day, traffic/surge for complexity)
In this situation, a rider who uses a shared rideshare option stands to save over the weekly Zipcar option. Riders who prefer to be alone in their cars are better off using the weekly rental option from Zipcar. Note — additional complexities around parking could sway riders to go one way or the other.
If the price of rideshare were to go down by 30–40%, then the Zipcar weekly rental business does not remain viable. However, the flexibility they provide users who are running errands and might need car space still gives these users a reason to use Zipcar.
Cheap rideshares are going to be things of the past as the prices are on an upward trajectory, which is good news for Zipcar. The COVID-19 pandemic has affected rideshare companies and Zipcar. However, the rise of grocery delivery services as well as its consolidation with rideshare (see Uber-Postmates deal) can have some interesting consequences. This could lead to vertical integration and massive network effects which might cannibalize Zipcar's revenue. Add into the mix delivery robots (Nuro, Refraction.AI or Prime Scout) and autonomous vehicles/robotaxis significantly reducing the rideshare/delivery costs, and you have serious challengers to the attractiveness of Zipcar.
Given it’s value proposition to consumers and to city governments, Zipcar is here to stay in the short term. It’s long term viability still remains in question if rideshare costs go down. | https://medium.com/techwheels/why-zipcar-is-not-a-thing-of-the-past-ddd0ba011a2e | ['Gwanygha A Gana'] | 2020-07-26 03:09:37.138000+00:00 | ['Mobility', 'Zipcar', 'Transportation', 'Ridesharing', 'Technology'] |
1,493 | Visiting Cleanshelf’s new Ljubljana offices | Two weeks ago our team was invited to spend an afternoon at an exciting Slovenian tech start-up called Cleanshelf. We were warmly welcomed by the VP of engineering Jošt Novljan and the CEO Dušan Omerčević at their brand new Ljubljana offices and then kicked off the tour with a short presentation of Cleanshelf’s product. The latter tackles the issue of managing a large number of SaaS (software as a service) subscriptions by optimising spend and ROI through identification of inactive subscriptions & accounts, overlapping licenses, and providing a central hub for all your SaaS applications through their app, so that your company can make informed decisions and get the most for their buck. As you can imagine employees nowadays use a large variety of different tools so their product comes in quite handy. And since they were the OG innovator in the space, they’ve now become the go-to SaaS management solution with their comprehensive list of integrations with leading financial, HR and single sign-on systems. Their product is currently being used by a variety of large companies such as AT&T, Harry’s, Prezi, Avant2Go and many more. | https://medium.com/fri-usa-tour/visiting-cleanshelfs-new-ljubljana-offices-25ae188b3523 | ['Damjan Kalšan'] | 2021-04-06 07:42:19.930000+00:00 | ['Company', 'SaaS', 'Technology'] |
1,494 | Bad Parts of JavaScript — Arithmetic and Objects | Photo by Maria Teneva on Unsplash
JavaScript is one of the most popular programming languages in the world. It can do a lot and has some features that are ahead of many other languages.
In this article, we’ll look at bad parts of JavaScript that we should avoid, including some math features and objects.
parseInt
The global parseInt function has a problem. It converts a string into an integer.
However, it doesn’t have to be a string that only contains a number for parseInt to work.
That’s bad because of the confusion it causes.
For instance, if we have:
parseInt("6")
that returns 6 as we expect. But if we have:
parseInt("6 grams")
Then we also get 6 returned.
That definitely causes confusion for many people.
The rule for parseInt is that if the first character of the string is 0, then the string is evaluated as base 8 instead of 10.
The digits 8 and 9 aren't valid in base 8, so '08' and '09' will be converted to 0 with parseInt.
However, we can provide the radix as a second argument to convert them to decimal:
parseInt( '08', 10)
Number.parseInt doesn't have these issues, so we should use that instead.
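A quick sketch of these pitfalls (standard JavaScript behavior; modern engines default a leading zero to decimal, but passing the radix removes any doubt):

```javascript
// parseInt stops at the first character that can't be part of a number,
// so trailing text is silently ignored.
const grams = parseInt("6 grams"); // 6, not NaN

// Passing the radix explicitly avoids any octal ambiguity with leading zeros.
// Number.parseInt takes the same arguments.
const withRadix = Number.parseInt("08", 10); // 8

// Number() is stricter: it rejects strings with trailing text outright.
const strict = Number("6 grams"); // NaN
```

Reaching for Number() is a good default when the whole string is expected to be numeric.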
+
+ can either add or concatenate. It depends on the types of the operands.
If either operand is an empty string, then it converts the other one into a string.
If both are numbers, then their sum is returned.
Otherwise, it converts both operands to strings and concatenates them.
This is a great source of bugs. Therefore, we should convert them both to numbers or strings.
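A short illustration of the coercion rules, and of normalizing operands up front:

```javascript
const sum = 1 + 2;      // 3: two numbers add
const concat = 1 + "2"; // "12": a string operand forces concatenation

// Converting both operands first makes the intent explicit.
const safeSum = Number("1") + Number("2"); // 3
const safeConcat = String(1) + String(2);  // "12"
```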
Floating Point
Floating point numbers are numbers that handle decimal fractions badly.
0.1 + 0.2 doesn't equal 0.3.
Therefore, we should convert each operand of an arithmetic operation to the scale that we want before operating on them.
For instance, if we want whole numbers as a result, then we should convert them both to whole numbers before doing the arithmetic operation that we want.
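For example, money handled in cents (a unit chosen here for illustration) stays exact where dollar fractions drift:

```javascript
const direct = 0.1 + 0.2;
console.log(direct === 0.3); // false
console.log(direct);         // 0.30000000000000004

// Work in the smallest whole unit (cents instead of dollars) and
// scale back down only for display.
const totalCents = 10 + 20;            // integer arithmetic is exact
const totalDollars = totalCents / 100; // 0.3
```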
NaN
NaN is an IEEE standard value for representing things that aren’t numbers.
However, in JavaScript, we have:
typeof NaN === 'number'
returning true.
NaN is the returned value when we try to convert a non-numeric string to a number.
If we have one or more NaN in an arithmetic expression, then NaN will be returned.
We can’t use the === to check for NaN since NaN isn’t equal to itself.
For instance, NaN === NaN returns false and NaN !== NaN returns true.
Therefore, we should use the isNaN function in the JavaScript library to check for NaN.
isNaN(NaN) returns true.
isNaN also returns true if we pass in anything that’s not a number, so:
isNaN('foo')
also returns true .
isNaN will try to convert anything passed into the function to a number before checking.
If we don’t want that, we can use Number.isNaN to compare the value as-is.
Phony Arrays
Arrays are just objects in JavaScript, so the typeof operator won’t work for checking arrays.
typeof [1, 2] returns 'object' for example.
Instead, we can use the Array.isArray method to check if something is an array.
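A small sketch of the two checks:

```javascript
const list = [1, 2];

console.log(typeof list);                  // 'object', which tells us little
console.log(Array.isArray(list));          // true
console.log(Array.isArray({ length: 2 })); // false: array-like isn't enough
```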
Falsy Values
JavaScript has a large set of falsy values.
0, NaN, '' (empty string), false, null and undefined are all falsy.
They’re all falsy, but they aren’t interchangeable.
Therefore, we should use the === operator to check values to avoid any issues with different kinds of falsy values.
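These distinctions show up directly in a small sketch:

```javascript
const falsy = [0, NaN, "", false, null, undefined];
console.log(falsy.every((v) => !v)); // true: all falsy in a boolean context

// But they are not interchangeable, which is why === matters:
console.log(0 == "");            // true: loose equality blurs them together
console.log(0 === "");           // false: strict equality keeps them apart
console.log(null == undefined);  // true
console.log(null === undefined); // false
```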
hasOwnProperty
hasOwnProperty is a method, and it can be replaced with another method or value, accidentally or otherwise.
Therefore, we should make sure that hasOwnProperty is actually the method we expect it to be.
Also, we can write:
Object.hasOwnProperty.call(foo, 'bar');
where foo is an object and 'bar' is the property name that we want to check for.
This way, we don’t have to worry about the hasOwnProperty method being overwritten.
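The shadowing problem and the borrowed-method fix look like this (the record object is made up for illustration):

```javascript
const record = {
  hasOwnProperty: "oops", // shadows the inherited method with a string
  name: "Ada",
};

// record.hasOwnProperty("name") would throw a TypeError here.
// Borrowing the method from Object.prototype sidesteps the shadowing.
const hasName = Object.prototype.hasOwnProperty.call(record, "name"); // true
const hasAge = Object.prototype.hasOwnProperty.call(record, "age");   // false
```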
Photo by Connor Botts on Unsplash
Object
JavaScript objects aren’t usually empty since they at least inherit from Object.prototype .
This may matter because we may accidentally be calling methods inherited from an object's prototype.
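One way to see this, and to get a genuinely empty object when we need one:

```javascript
const plain = {};
console.log("toString" in plain); // true: inherited from Object.prototype

// Object.create(null) yields a truly empty object, handy as a dictionary
// keyed by arbitrary strings with no inherited names to collide with.
const dict = Object.create(null);
console.log("toString" in dict);  // false: nothing inherited
```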
Conclusion
parseInt is a function that may have issues if we use it, since it tries to convert strings that contain non-numeric characters into numbers.
+ also tries to convert the types of its operands, so we should convert all operands to the same type. | https://medium.com/swlh/bad-parts-of-javascript-arithmetic-and-objects-185a88309aee | ['John Au-Yeung'] | 2020-06-07 19:08:32.032000+00:00 | ['Technology', 'JavaScript', 'Software Development', 'Programming', 'Web Development'] |
1,495 | The Quest for an Ultimate Theory of Gravity | Without gravity the night sky would look very empty. Stars, galaxies, moons, and planets — none of these could exist without gravity holding them together. Neither, for that matter, could the Sun or Earth. It is gravity that pulled together diffuse atoms and built the universe that we see around us today.
The glory of the night sky, all thanks to gravity. Credit to ESO.
Gravity has also been at the heart of our own efforts to understand the nature of reality, from Newton’s formulation of the universal law of gravity to Einstein’s theory of relativity. Today gravity lies at the centre of new problems in science, and holds the key to uncovering what may be the ultimate theory of physics.
For something that has had such a profound effect on humanity, gravity is surprisingly weak. Of the four fundamental forces known to physics, gravity is by far the weakest. This weakness can easily be demonstrated — the magnetic force of a small fridge magnet can easily lift a pin into the air, thus defeating the gravitational force of an entire planet.
The weakness of the gravitational force means it can be almost completely ignored at the level of atoms and molecules. It is only when we look at very big objects — planets, stars and galaxies, that gravity starts to matter. While the other fundamental forces fade away over short distances, the force of gravity can be felt from one side of the galaxy to the other.
Humans have known about gravity, or at least the effects of gravity, since the most ancient times. The basic fact that things fall down when dropped is known to everyone, and would have been obvious even in the Stone Age. Ancient civilizations in Greece and India developed a basic understanding of the nature of gravity, and knew that falling objects accelerate, but were unable to apply these ideas to the wider universe.
It was not until the Renaissance, and the days of Galileo, Kepler and Newton, that a truly scientific analysis of gravity was made. Experiments by Galileo demonstrated the counter-intuitive fact that all objects, regardless of how heavy they are, accelerate at the same rate when falling. And Kepler, working with observations of the planets, developed laws describing the motion of the planets around the Sun.
In 1687 Isaac Newton published a book, Philosophiæ Naturalis Principia Mathematica, summarising his three laws of motion, and the universal law of gravitation. For the first time gravity was expressed mathematically, and with this Newton was able to show that Kepler’s laws arose from a simple equation describing the force of gravity. Gravity was revealed as the force that shaped the universe, governed the motion of the stars and planets, and gave us seasons, tides and falling apples.
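That simple equation is the inverse-square law: the attraction between two bodies grows with their masses and falls off with the square of the distance between them,

```latex
F = G \, \frac{m_1 m_2}{r^2}
```

where F is the gravitational force, G the gravitational constant, m_1 and m_2 the two masses, and r the distance separating them.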
Over the next two centuries scientists built on Newton’s laws to develop what is now known as Classical Physics. These scientists had tremendous success in describing and predicting the natural world. Astronomers, noticing discrepancies in the orbit of Uranus, were able to use classical physics to predict the position of an eighth planet further out in the Solar System, predictions that led to the discovery of Neptune in 1846.
Neptune’s existence was first predicted by deviations in the orbit of Uranus. Credit to NASA.
Despite these successes, cracks were forming in our understanding of gravity. The orbit of Mercury also did not follow predictions, and astronomers searched fruitlessly for another planet to explain this strange behaviour. And although scientists could explain the effects of gravity, nobody could understand quite how stars and planets millions or billions of miles away could exert a force on the Earth.
Answering these questions required a revolution in physics, and the early twentieth century brought one. In just a handful of years Classical Physics was swept aside by two new theories — Quantum Physics and Relativity. While quantum theory was mostly associated with very small things, and could therefore largely ignore gravity, relativity became intimately linked with gravity.
Einstein’s famous special theory of relativity concerns the speed of light, and the behaviour of objects moving close to the speed of light. As first formulated by Einstein this had little to do with gravity. However, in the years following the publication of the special theory of relativity, Einstein developed a more general theory. This theory, appropriately known as the General Theory of Relativity, described the behaviour of gravity more accurately than Newton’s theory, and not only solved the problems with the motion of Mercury, but predicted a whole host of exotic objects — black holes, wormholes and even the possibility of time travel.
The first image of a black hole, demonstrating the accuracy of Einstein’s theory. Credit to EHT.
The general theory of relativity is an extraordinarily beautiful theory. In classical physics the ideas of matter, motion, space and time are all thought of separately. Space acts as a stage, upon which matter can act. Time is simply a clock, ticking away, allowing matter to move through the stage. But in relativity these ideas are united. Space and time combine into a single entity, spacetime. The presence of matter distorts spacetime, and as matter moves through both space and time, those distortions in turn affect the motion of both matter and light.
The predictions of general relativity were first confirmed by Arthur Eddington in 1919. Physicists studying the complex equations soon found solutions pointing to the existence of black holes — objects so massive that they distort space in such an extreme way that light itself cannot escape. Treated at first as just a mathematical curiosity, the idea of black holes gradually gained acceptance over the following decades. Other solutions to Einstein’s theories have also been proposed, suggesting that bizarre objects such as wormholes, linking distant regions of space through higher dimensional space, or even closed loops of time, are possible.
Despite the revolution in physics, problems with gravity still persisted. Our understanding of the size of the universe changed radically in the early twentieth century when other galaxies were identified for the first time. But observations of these galaxies revealed that they did not spin at the speed predicted by our theories of gravity. Physicists have tried to solve this problem by invoking “Dark Matter”, theorised to be some kind of almost invisible particle that adds additional mass to galaxies. Dark matter cannot be seen by our telescopes, and despite many years of searching we still don’t know what dark matter is. This has led some scientists to try looking for other solutions, including making modifications to the laws of gravity.
Dozens of galaxies seen in the Abell 3827 cluster. Their motion can only be explained by invoking a mysterious and so far invisible type of particle known as Dark Matter. Credit to ESO.
During the 20th Century, three of the four fundamental forces of nature were described in the language of quantum physics. In this theory each fundamental force has an associated particle — for electromagnetism this is the photon, for the two nuclear forces (known as the Strong and Weak Nuclear Forces) three particles are associated — gluons, and the W and Z bosons.
It seems reasonable to expect the fourth fundamental force, gravity, would have a quantum particle as well. This particle, known as the graviton, is actually quite well defined theoretically, but if it does exist it is extremely hard to detect. It is hard, even in theory, to design a detector that could find the graviton. Indeed, scientists believe that it is impossible to build such a detector on the Earth. Problems also arise when trying to fit the graviton in with the mathematics of the rest of quantum physics.
Building a quantum theory of gravity remains one of the key challenges of physics. The hunt for quantum gravity has led to a number of theories — string theory, M theory, loop quantum gravity, twistor theory, M8 theory… Theoretical physicists have thrown out dozens of suggestions on how to incorporate gravity into the quantum world, but so far it has been impossible to determine which, if any, is correct.
The weakness of the gravitational force is the main issue. The force is so weak that its effects cannot normally be seen at the quantum scale. Only in some of the most extreme places in the universe — in the big bang or at the heart of a black hole — can quantum gravity be observed. Black holes, by their very nature, cannot be directly observed, and neither can we peer back to the very earliest moments of the universe when quantum gravity may have been present.
All this means that for now the final theory of gravity remains out of our reach. Physicists will no doubt continue to build theoretical models of quantum gravity, but until we find a way to experimentally test those theories it is unlikely we make any real progress. And without a theory of quantum gravity, some of the most bizarre and mysterious structures in the universe will remain unknowable to us. | https://medium.com/discourse/the-quest-for-an-ultimate-theory-of-gravity-faf6d6a04596 | ['Alastair Isaacs'] | 2020-10-14 20:06:11.260000+00:00 | ['Astronomy', 'Physics', 'Space', 'History Of Technology', 'Science'] |
1,496 | Status Quo of AR — Part 1. AR Technology | Facebook Livemap
AR Technology
Augmented Reality, abbreviated as AR, is the technology that overlays virtual information on top of the real one. For instance, we may see Pokémon on the ground dodging Poké Balls through a smartphone screen, or we may see an exaggerated virtual cosmetic cover on faces. This requires machines to understand the environment so well that they figure out how to present the information correctly to fool us. In the following sections, I will write about how machines perceive environments, from low-level information to high-level information.
Positioning
First and foremost, for an AR device like smartphones or glasses to display 3D virtual content in the real world, it needs to be aware of its position and orientation, which is achieved by SLAM (Simultaneous Localization and Mapping). The machine first extracts visually distinct points in the environment called feature points; the positions of feature points are tracked and used to infer the camera position. Combined with an IMU (Inertial Measurement Unit, used to measure orientation), the pose of the camera can be estimated.
Position Tracking
If we collect and cluster detected feature points, we can recognize some primitive geometries such as horizontal or vertical planes. After detecting planes, we can do some simple interaction with the real environment by placing virtual content on top of a plane.
Occlusion
Simply estimating camera pose and primitive geometry leaves the occlusion issue unsolved. In the below picture, we can track the pose of an object. But as long as a real object is in front of the virtual one, immersion will be broken. Because the geometry of the person's legs is not densely reconstructed, the algorithm can't decide which parts of the Pokémon are behind the legs, so it cannot hide the occluded parts.
Occlusion
To tackle this problem, one of the solutions is to compute the depth value of each pixel, either by adding depth sensors like Lidar or by using an algorithm to approximate the depth values.
Depth information can also be used to construct an environmental mesh and simulate physics effortlessly.
from 6D.ai
Lighting
In terms of realism, a component is still missing — the lighting. In the real-world environment, there is lighting from light sources and inter-reflections between objects. To display virtual items realistically, we need to reconstruct the environmental lighting for rendering so that virtual objects can blend in.
Semantic Information
The above methods only extract physical information. Empowered by AI algorithms, AR devices can also detect semantic information such as faces, objects, or text. This high-level information facilitates applications such as facial filters or text translation.
Google Translation AR
The state of the art in AR technology is the capacity of machines to recognize locations. This task is called feature matching, where algorithms can determine whether two images belong to the same location and even estimate the relative pose between them.
If the feature points of a specific location are stored in a database, which is sometimes called an AR cloud, we can query the saved features whenever a camera looks at the same location. The algorithm then computes the relative position and recovers the virtual objects placed in previous sessions.
Location recognition coupled with AR cloud can further bridge the gap between the virtual and real world.
Imagine you can place a note in front of a door, and other people can scan the door with smartphones to retrieve the note as a reminder. This persistent experience is essential for AR as a social tool, which is the focal point of many companies.
Placenote Persistent AR
Contextual Information
Beyond existing technologies, imagine machines that can understand arbitrary context. Your AR glasses could understand the layout of your home and display information in the hall to remind you to bring an umbrella, or recommend diet choices and health information in the kitchen or at the refrigerator. Integrating digital information from high-level context to low-level poses, Facebook Reality Labs envisions LiveMaps as a representation of the real world, which could radically change how humans behave in a meaningful manner with AR technology. | https://medium.com/immersive-media/status-quo-of-ar-part-1-48fd611964fa | ['James Zhang'] | 2020-12-16 18:04:04.426000+00:00 | ['Facebook', 'Mobile', 'Google', 'Augmented Reality', 'Technology'] |
1,497 | Razer Kaira Pro Wireless Xbox Gaming Headset Review | Razer Kaira Pro Wireless Xbox Gaming Headset Review
2020’s Best Xbox Wireless Headset
Photo taken by the author.
NOTE: Razer graciously sent me a final retail unit of this headset to review alongside marketing assets and technical information. They also took the time to chat with me in a short video call about the design of the product. No money changed hands and I had full editorial control over this article.
As per my reviews policy, this article will never be monetized, but other additional content about this gaming headset, such as comparison articles I write in the future, might be. My posts contain zero affiliate links as I don’t personally believe in the practice.
Once again, Razer has surpassed my expectations with a surprising new headset. The Kaira Pro is an exciting new design that packs in all the features I’d expect from a premium gaming headset, and it’s also a far cry from the market trend of recycling an old model with Xbox connectivity shoved in.
Xbox Wireless support is still not a common feature in the gaming headset world. Microsoft based the wireless system in both the older Xbox One and newer Series X|S consoles on Wi-Fi Direct. Their proprietary protocol means that companies need to pay a licensing fee and go through a certification process in order to release a wireless headset for Xbox consoles. There are two typical design routes that tech companies can choose from. They can either license a secondary USB dongle as a virtual Xbox controller (as controllers have audio support built-in), or go the tougher route and integrate the Xbox Wireless hardware directly into their headset.
In order to help mitigate these extra licensing costs, companies will often recycle an existing headset design for their Xbox version. They also sometimes pass the licensing costs on to the consumer, which inflates the price of Xbox headsets compared to PlayStation or PC models.
Official marketing image provided by Razer.
Razer did things differently with the Kaira Pro, their new Xbox headset that sells for $149.99 (official site here). This is a brand-new design, based in part on the excellent foundation of the BlackShark V2 series and the Razer Opus. It’s a closed-back design with both Xbox Wireless and Bluetooth 5.0 connectivity, Chroma RGB lighting that’s customizable with a new Xbox app, a detachable boom microphone, and a second built-in microphone for things like taking calls.
The Kaira Pro charges over USB-C, and Razer says it’ll take about four hours to charge a completely dead battery. The USB-C port is a bit recessed into the ear cup, and the divot is more square-shaped than most of the USB-C cables I own, so you may have to use the included cable to charge it. It’s a nice braided cable, similar in quality to the one included with the Xbox Elite controller. Battery life is rated at 15 hours with lights on and 20 hours with them off…and I consistently beat those numbers by a few hours in my testing, so that’s great.
Official marketing image provided by Razer.
Sound is handled by Razer’s “TriForce” drivers, which use a triple chamber design and are coated in titanium. These same drivers are inside the BlackShark V2 and BlackShark V2 Pro, and a non-titanium-coated version was used in the BlackShark V2 X (one of the year’s best budget headsets). The design allows Razer’s engineers to fine-tune the sound of the drivers more precisely, and the results are remarkable.
Bass is energetic and thumpy without any hint of bleed into the rest of the audio. Mids and highs are both clean and detailed, with just a hint of extra energy up top that should help with positional accuracy. Soundstage is wider than the average closed headset, too. This is all with the Kaira Pro set to its “default” EQ, which is one of four available settings. There’s also a bass preset and an FPS preset, which you can toggle to by double-tapping the Xbox pairing button. The final preset slot is fully customizable through the Razer Headset Setup app.
Screenshot taken by the author.
That app is wonderful, allowing the type of tweaking usually reserved for PC headsets. You can set up your custom EQ, adjust the lighting effects, adjust the EQ of the microphone (which is awesome), and also activate mic monitoring. The app is available in both the Xbox and Windows 10 stores, and will sync your settings instantly.
The Kaira Pro is first and foremost designed for Xbox gameplay, and it uses a dongle-less design that syncs directly to your console just like a controller. This means that once you’re synced to a console, you can also use the headset to turn the machine on. I used mine extensively with my Xbox Series S, playing hours of games I’m familiar with like Control, Borderlands 3, and Watch Dogs Legion. The headset handled the complex soundscapes of these games with ease, whether I listened in standard stereo mode or with Windows Sonic spatial audio turned on.
Screenshot taken by the author.
I also checked them out on PC, since I have an Xbox Wireless adapter. I used them for voice chat and Borderlands 3 multiplayer gameplay with a friend as part of a regular weekly online gaming session, and he said the mic sounded wonderful and was nigh-indistinguishable from the wired mic on the Razer BlackShark V2 X. Like that model, Razer has employed their “Hyperclear” microphone here, and it sounds incredible thanks to the enhanced bandwidth provided by the Xbox wireless connection. It uses a huge-for-a-headset 9.9mm cardioid mic capsule with great background noise reduction. Here’s a quick mic test I recorded with my PC.
The Kaira Pro also provides Bluetooth 5.0 support through a separate pairing button. It doesn’t support any enhanced codecs like AptX, but it still sounds clean and crisp in this mode. You can use both connections simultaneously, if you want to bring in music and notifications from your phone, or something like that. You can also use the Bluetooth mode while on-the-go or away from your Xbox. The headset will seek an Xbox connection upon startup, but after a few minutes it’ll time out and fall back to Bluetooth mode only. It’s worth noting that if you’re in range of your synced Xbox, it will power the console on, and if the Xbox goes to sleep the headset will turn off.
Photo taken by the author.
That makes it potentially challenging to use the headset in Bluetooth-only mode if you’re just hanging out around your house. Normally that wouldn’t be a problem, but right now I live in an area under virus restrictions and I’m not spending any time in coffee shops or wandering outside my apartment. When I was synced to my PC’s Xbox adapter, this wasn’t an issue, since the adapter can’t turn on the PC, and I was able to check out the Bluetooth functions for several straight hours without my Xbox shutting down the headset. It sounds just a tiny bit more compressed, but the multi-function button responds well for the many different functions it is burdened with, and the built-in ear cup mic was clear when I tried it in a call.
Aside from two pairing buttons on the right cup, there’s also a handy game/chat balance dial. This functions only on Xbox, and allows for quick adjustment of audio balance with a beep at each extreme. The left cup has a volume knob and a mic mute switch that are both easy to find with your thumb.
Photo taken by the author.
As far as I know, this is the first Xbox headset to have full proper RGB lighting. It has all the same features you might expect from Razer’s Chroma RGB, including different effects (breathing, spectrum cycling) and a wide gamut of selectable colors. As it’s controlled with its own unique app, it won’t sync to your other Synapse-based devices, but it’s awesome to see this level of lighting control on a console headset. When the lights are off, the Razer logos practically blend into the headset, which is great for those that care about subtlety. I enjoyed leaving them on.
Comfort and build are both just as exceptional as the sound performance, and essentially best-in-class. The ear cushions use Razer’s “FlowKnit” mesh sports material to reduce heat build-up, and are filled with a nice memory foam. The headband feels a little stiff at first touch, but it perfectly spreads the weight of the headset across my head and I had no discomfort even in multiple three-hour sessions. The ear cups have plenty of swivel and strong clicky numbered adjustments. I have two extra clicks of room on my large head so it should fit most heads fine, and my ears don’t touch anywhere inside the cups thanks to angled drivers and ample space.
Official marketing image provided by Razer.
Most of the headset’s frame is plastic, though there’s some prominent metal reinforcement where it counts right near the ear cup swivels and through the adjustments into the headband. I’ve had no squeaks or creaks after several days of heavy use, and I don’t expect that to change over time. The design language is more subtle than Razer’s pre-2020 headsets and headphones, aside from the green color accents. The colors perfectly match the Xbox Series X. The plastic has a nice matte texture to it, and it’s a little bit thicker and more premium than I was expecting.
This is a wonderful headset overall, and I have only one small caveat to mention that’s due to both the underlying tech and the space I used them in. As Xbox Wireless relies on Wi-Fi direct, wireless interference can cause some small issues. My apartment building is a nightmare field of 2.4ghz interference, and a handful of times while using the headset, I heard some brief static noise as it changed channels to find a cleaner signal. This didn’t happen often enough to frustrate me, and it doesn’t affect the Bluetooth connection. But if you’re sensitive to that sort of thing, be aware it can happen. Also, make sure to install the firmware update the packaging material encourages you to install!
If you don’t need the RGB and Bluetooth functions, and don’t mind a permanently attached microphone and slightly shorter battery life, Razer also sells a cheaper standard version of the Kaira for $99. It’s awesome to see them hit that low price point with this level of performance for Xbox users.
Photo taken by the author.
I’ve tried a lot of Xbox Wireless headsets over the past few years…from the bad (Stealth 700 Gen 1) to the good (Rig 800, CloudX Flight, Astro A20). This is my personal favorite so far. The Kaira Pro excels in every category. It combines excellent sound performance for gaming and music with an awesome microphone, all-day comfort, decent battery life, seamless Xbox support, a cool new app, and a solid backup Bluetooth connection. If you’re an Xbox gamer who also owns a Bluetooth device you want to listen to, it’s an easy recommendation. It’s also a good PC headset, though you will need a separate adapter, and Razer sells plenty of great alternatives for that platform that tie more directly into their ecosystem.
Between their acquisition of THX, the release of two truly great budget headsets in the BlackShark V2 X and the Kraken X, the excellent performance of the Opus, and now the premium Xbox and Bluetooth experience of the Kaira Pro, Razer has done a lot to excel in the audio space over the last couple of years. Their hard work is paying off.
I think it was really smart to release a new headset alongside the new Xbox consoles, and it’s an easy choice to go for whether you’ve upgraded to a new machine, or you have one of the older consoles and you’re looking for a better audio experience. This is the new gold standard by which other Xbox Wireless headsets will be judged going forward. | https://xander51.medium.com/razer-kaira-pro-wireless-xbox-gaming-headset-review-7f49537a1493 | ['Alex Rowe'] | 2020-12-09 22:39:18.684000+00:00 | ['Technology', 'Gaming', 'Music', 'Gadgets', 'Tech'] |
1,498 | How Machine Learning Shapes the World | Machine learning is one technique of AI.
Machine learning (ML) is used behind the scenes to shape our everyday lives, inform business decisions, and optimize operations for some of the world’s leading companies as well as novel startups.
Let’s find out where ML is put to practical use. After reading this post, you may be inspired to start learning machine learning yourself.
1. Spotify
To unlock the potential of human creativity — by giving a million creative artists the opportunity to live off their art and billions of fans the opportunity to enjoy and be inspired by it.
Spotify, an online music streaming service, is well known for its superb recommender system. Recommending relevant content to users gained importance as the subscription business sprouted. Subscription services no longer charge users per item purchased or per access. Providing users with a satisfying experience inside the service, and thus keeping them from canceling their subscriptions, is a key to success. Spotify does this well through its recommender system.
Spotify recommends you songs based on collaborative filtering. To give you a simple analogy: if a person A listens to songs alpha, beta, and gamma, then a person B who also listens to alpha and beta will likely enjoy gamma too. This is how collaborative filtering works. Several other technologies, like user-based and content-based recommender systems, are also applied.
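The analogy above can be sketched in a few lines of Python. This is a toy "people who listened to X also listened to Y" recommender with made-up users and song names, not Spotify's actual algorithm:

```python
# Toy collaborative filtering: score unheard songs by how many songs
# the candidate listener shares with each other user.
listens = {
    "A": {"alpha", "beta", "gamma"},
    "B": {"alpha", "beta"},
    "C": {"delta"},
}

def recommend(user, listens):
    heard = listens[user]
    scores = {}
    for other, songs in listens.items():
        if other == user:
            continue
        overlap = len(heard & songs)  # similarity = number of shared songs
        for song in songs - heard:    # only suggest songs the user hasn't heard
            scores[song] = scores.get(song, 0) + overlap
    # Highest-scoring unheard songs first
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("B", listens))  # ['gamma', 'delta']
```

Since B shares two songs with A and none with C, A's remaining song gamma ranks first. Production systems replace this raw overlap count with learned embeddings and matrix factorization, but the intuition is the same.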
If you want to learn more about recommender systems applied to the industry, check this link. https://medium.com/s/story/spotifys-discover-weekly-how-machine-learning-finds-your-new-music-19a41ab76efe
2. Google
Organize the world’s information and make it universally accessible and useful.
This gigantic company unequivocally uses machine learning all over the place. To pick one example out of many, Google has amazing NLP (Natural Language Processing) technologies. These technologies are applied to humans’ major communication medium: speech. A speech-synthesizing machine literally creates a human voice when we type in sentences. Such a text-to-speech model can reproduce the voices of people who can no longer make articulatory movements. Indeed, Google recreated the voice of former NFL star Tim Shaw, who was diagnosed with ALS.
I had a chance to learn about this amazing project from the YouTube Originals series “THE AGE OF A.I.” This documentary introduces several companies applying artificial intelligence with great ingenuity in various fields, from making prosthetic legs to space exploration. Follow the link to watch it! https://youtu.be/UwsrzCVZAb8
3. SUALAB
Pursue members’ happiness through fair rewards for hard work and the pleasure of work. Contribute to the world by setting people free from machine-like work.
SUALAB uses machine vision for factory automation. On traditional factory floors, faulty products had to be picked out by human eyes. SUALAB instead uses object detection and instance segmentation to find defects in a photo of a product. Object detection is the area of computer vision that locates objects inside an image. For example, when you take a photo of a street, finding the objects in it and classifying them as cars or humans is the role of object detection. Instance segmentation goes further and marks exactly which pixels of the image belong to each car or human. Using such technology, SUALAB’s approach dramatically reduces wasted manpower.
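To illustrate the idea of pixel-level defect finding, here is a deliberately naive sketch that flags pixels deviating from a defect-free "golden" reference image. Real systems like SUALAB's use learned segmentation models rather than simple differencing; the images and threshold below are made up:

```python
# Naive pixel-level "defect segmentation": compare a product photo against
# a golden reference and flag pixels that deviate beyond a threshold.
reference = [
    [100, 100, 100],
    [100, 100, 100],
    [100, 100, 100],
]
product = [
    [100, 100, 100],
    [100,  30, 100],   # a dark scratch in the middle of the product
    [100, 100, 100],
]

def defect_mask(ref, img, threshold=20):
    # True marks a "defective" pixel, mimicking a segmentation mask.
    return [
        [abs(r - p) > threshold for r, p in zip(ref_row, img_row)]
        for ref_row, img_row in zip(ref, img)
    ]

mask = defect_mask(reference, product)
print(mask[1])  # [False, True, False]
```

Differencing like this breaks down under lighting changes and natural product variation, which is exactly why learned models that generalize from labeled examples took over this task.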
An engineer in SUALAB introduces how machine learning can be applied to computer vision in the following link. It really helped me understand trends of object detection and instance segmentation which are major problems that machine vision solves. https://hoya012.github.io/
Knowing how machine learning is applied in the world is what made me major in this field of study. I always feel that I can be one of those game changers. In subsequent posts, I will write about how I study machine learning following my own road-map.
1,499 | What's New in Python 3.9 | Python’s New Path
There are two significant changes in this update. We won’t see any immediate impact from them, but we will begin to notice a slightly different evolution of Python as a language.
In short, this boils down to:
Python’s parser limitations
Smaller, but more frequent releases
LL(1) and PEG
Around 30 years ago, Guido van Rossum wrote pgen, one of the first pieces of code ever written for Python. It is still in use as Python’s parser generator to this day [1].
Pgen uses a variant of LL(1)-based grammar. This means our parser reads the code top-down, left-to-right, with a lookahead of just one token.
This essentially means that Python development is limited, because:
The lookahead of one token limits the expressiveness of grammar rules.
Python already contains non-LL(1) grammar, meaning the current parser uses a lot of workarounds, overcomplicating the process.
Even with these workarounds, only so much is possible. The rules can be bent, but not broken.
With LL(1), particular left-recursive syntax can cause an infinite loop in the parse tree, causing stack overflow — as explained by Guido van Rossum here.
These attributes of the LL(1)-based parser limit what is possible in Python.
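The contrast can be sketched with toy parser combinators. This is an illustrative PEG-style parser with ordered choice and backtracking, not CPython's actual implementation; the grammar is made up:

```python
# Minimal PEG-style combinators. A PEG parser may consume input
# speculatively and rewind on failure, which an LL(1) parser
# (one token of lookahead, no backtracking) cannot do.

def literal(s):
    def parse(text, pos):
        if text.startswith(s, pos):
            return pos + len(s)      # success: new position
        return None                  # failure
    return parse

def seq(*parts):
    def parse(text, pos):
        for part in parts:           # each part continues where the last ended
            pos = part(text, pos)
            if pos is None:
                return None
        return pos
    return parse

def choice(*alts):
    def parse(text, pos):
        for alt in alts:             # ordered choice: try each alternative,
            result = alt(text, pos)  # backtracking to `pos` on failure
            if result is not None:
                return result
        return None
    return parse

# Grammar: stmt <- "ab" "c" / "a" "bd"
# Both alternatives start with "a", so one token of lookahead cannot
# decide between them; the PEG simply rewinds out of the failed branch.
stmt = choice(seq(literal("ab"), literal("c")),
              seq(literal("a"), literal("bd")))

print(stmt("abd", 0))  # 3: the first branch fails after "ab", then PEG backtracks
```

An LL(1) parser handed this grammar would have to commit to one alternative after seeing the first token, which is precisely the kind of restriction that forced workarounds in Python's old grammar.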
Python 3.9 has broken from these limitations thanks to a shiny new PEG parser, outlined in PEP 617.
Immediately, we won’t notice this. No changes taking advantage of the new parser will be made before Python 3.10. But after that, the language will have been released from its LL(1) shackles.
Development Cycles | https://towardsdatascience.com/python-3-9-9c2ce1332eb4 | ['James Briggs'] | 2020-10-05 21:11:46.012000+00:00 | ['Technology', 'Software Development', 'Data Science', 'Programming', 'Python'] |