Dataset columns and string lengths: title (1–200), text (10–100k), url (32–885), authors (2–392), timestamp (19–32), tags (6–263).
Everything You Need to Know About Foot Cramps
The Causes The causes of foot cramps can be divided into two major groups: medical issues and muscular imbalance. Medical issues can include an electrolyte abnormality (a deficiency in potassium, calcium, or magnesium), a lack of circulation, side effects of medication (common with statins for cholesterol), or other nerve problems. These medical cases must be treated directly to relieve the cramps. More often, however, foot cramps are the result of a muscular imbalance. Most people’s cramps happen in the arch flexor muscles, which reach from the heel to the tips of the toes. Unfortunately, most Americans (among many others) have spent nearly their entire lives in shoes with elevated heels and toe springs without even noticing it. A toe spring is the upturned end of the toe box, which props your toes up above the ball of your foot. A study on the effects of heeled shoes showed just how detrimental they can be to our health. The study looked at two groups of women, one wearing completely flat shoes and the other wearing 2-inch high heels. The women wearing the high heels experienced a 13% shortening of their calf muscles. It is now understood that heeled shoes will shorten the musculature in the back of our legs to varying degrees. This opens the door to many more problems throughout the body created by a muscular imbalance. Regarding toe spring, a paper written in 1905 by Dr. Phil Hoffman noted that only in industrialized cultures, where people wear heels and toe springs, do people develop tightness and contracture of the muscles on the front of the leg. So if you imagine having tightness and shortening in the back of your calf muscles from the elevated heel, as well as tightness and shortening in the front of the lower leg due to toe spring, the only other layer of muscles left in the foot is the small intrinsic muscles. When the foot is in the shoe position, the arch muscles become overstretched, creating a length-to-tension imbalance, which may result in foot cramps. In order for your arch muscles to function properly, they need to be in their proper length-to-tension relationship. This means that if the heel bone is pulling upward and the toe bones are pulling upward (the typical shoe position), these little arch muscles are being held in too long a position and will no longer function appropriately. According to Rebecca Shapiro, LMT, a medical assistant at Dr. McClanahan’s clinic, “In general, muscles are more prone to cramping if they are weak, or if they’ve been held in a chronically lengthened position before being asked to flex or contract. In the case of the plantar foot muscles, cramping is common for both reasons. When the brain sends a message for these muscles to contract, the muscle fibers fire in an erratic and disorganized way which is the cause of cramping.” The Solutions As you can probably tell, the main culprits here are our shoes, for the specific reasons of the elevated heel and the toe spring. So what easier way to fix this problem than to simply remove them from your life? A flat sole is exactly what we’re calling for, and that’s exactly what minimalist footwear provides: a flat, zero-drop sole, no arch support, and no toe spring (kind of like a natural foot 😉 ). You can reset your foot and lower leg muscles back to their original length-to-tension relationships: lengthen the calf muscles, lengthen the shin muscles, and shorten the arch muscles again to restore the balance.
https://medium.com/in-fitness-and-in-health/everything-you-need-to-know-about-foot-cramps-ee71b31a6743
['Will Zolpe']
2020-12-03 19:35:50.814000+00:00
['Pain', 'Body', 'Health', 'Fitness', 'Muscles']
I’m No Longer Judging Others and This is How I Do It
How you see the world is a mirror back to yourself. I believe we all mirror our beings in others. What bothers us about someone else is a clear reflection of what’s happening in our inner world. It’s a simple but effective theory. There are two possibilities we need to be mindful of: either 1) we envy them for having what we don’t have, or 2) there’s a deeply rooted, environment-based opinion within us. Let’s go through some examples. First, feeling envious of someone. If you see someone on social media who is living their best life, traveling the world, setting up their own business, and spreading good vibes, and it’s bugging you deep inside and you start to judge that person, there’s a clue about you in that judgment. The fact that the way they live their life bothers you, while you’re struggling at a day job you haven’t managed to quit for years, simply tells you that you have the potential to live that way too. You want to live the way they live. You just haven’t yet shown the courage to do the same. You haven’t taken the leap yet. Your soul craves that relaxed life and is most likely asking you to do the same. Apply this to the situations where you judge others for being too much of something and see if it mirrors anything about yourself. They are mirroring a trait that you haven’t yet found the courage to embrace. And second, if you say, “Nope, the first theory doesn’t apply to me; I really judge someone when I don’t like what they do,” then I have an offering for you too. Look deep inside at why that thing triggers you. Another example: a common thing we all judge, in real life or on social media, is a woman wearing clothes that are considered provocative or sexual. If you find yourself judging that person, can you bring light to why you feel the need to judge someone or control how someone dresses? Might there be a deeply rooted opinion about that situation within yourself? Maybe growing up, or now as an adult, you’ve been surrounded by people who judged how others look and what they wear, and it became your autopilot response? Or maybe you’ve been raised in an environment where clothes are tied to sexuality? These are just simple examples from our day-to-day judgments. And there’s a very thin line between being envious of someone for having the traits we don’t and judging out of an autopilot opinion. But each time you notice that someone, in real life or on social media, triggers a feeling within you, turn the camera to yourself and see why that thing triggers you. We’re all doing our best in life. We all have different stories, but we’re also very similar in nature. Our lessons may look different on the outside, but the deep lesson is usually the same. The way we learn things in life is different. The way I learn my lesson about loving another being may look different from the way you learn yours. But if it’s all about the same lesson, in the end we’re in the same boat of being human. Or maybe the timing of my life is simply different from yours. So, let’s leave judgment at the door. Let’s not tell others what or when they should do certain things. Let’s not belittle others when they make mistakes. Instead, see everyone as a teacher. Your mom is a teacher. Your lover is a teacher. The person you came across at a cafe who was kind to you is a teacher. You’re a teacher. When we see every situation and everyone as a teacher, life gets easier. We don’t take things too personally.
Yes, some things are personal, and there are things we need to look back at in ourselves, but if there’s something good we can learn from conflicts with others, we have a win. Another way to achieve a non-judgmental attitude is to look at a situation or person with empathy. When we examine someone’s story, it’s important to see where they come from, their culture, their upbringing, or what experience caused them to respond this way. Of course, when we’re not in the same situation as the person we’re judging, it’s easy for us to choose the wiser way to act. That’s why we should look at conflicts mindfully, from a wider angle, and evaluate them without being too harsh on others.
https://medium.com/the-innovation/im-no-longer-judging-others-and-this-is-how-i-do-it-3e3a7578cd3c
['Begüm Erol']
2020-11-17 18:25:16.800000+00:00
['People', 'Life', 'Self', 'Lifestyle', 'Mindfulness']
Quick and easy model evaluation with Yellowbrick
Now and then I come across a Python package that has the potential to simplify a task that I do regularly. When this happens, I’m always excited to try it out and, if it’s awesome, share my new knowledge. A couple of months ago, I was browsing Twitter when I saw a tweet about Yellowbrick, a package for model visualization. I tried it, liked it, and now incorporate it into my machine learning workflow. In this post, I’ll show you a few examples of what it can do (and you can always go check out the documentation for yourself). Fast facts Some fast facts about Yellowbrick: Its purpose is model visualization, i.e., helping you understand visually how a given model performs on your data so you can make informed choices about whether to select that model or how to tune it. Its interface is a lot like that of scikit-learn. If you’re comfortable with the workflow of instantiating a model, fitting it to training data, and then scoring or predicting in one line of code each, then you’ll pick up Yellowbrick very quickly. Yellowbrick includes “visualizers” (a class specific to this package) based on Matplotlib for the main types of modeling applications, including regression, classification, clustering, time series modeling, etc., so there’s probably one to help with most of your everyday modeling situations. Overall opinion Overall, I enjoy using Yellowbrick because it saves me time on some routine tasks. For instance, I have my own code for visualizing feature importances or producing a color-scaled confusion matrix that I copy from project to project, but Yellowbrick lets me quickly and easily produce an attractive plot in fewer lines of code. The downside to this easy implementation, of course, is that you don’t have as much control over how the plot looks as you would if you coded it yourself. If the visualization is just for your benefit, fine; but if you need to manipulate the plot in any way, prepare to dig into the documentation. A fair trade, for sure, but just consider the end user of your plot before you begin so you don’t have to do things twice (once in Yellowbrick, once in Matplotlib/Seaborn/etc.). Speaking of doing things twice, let’s take a look at the same visualization routine in Yellowbrick v. Matplotlib. Feature importances with Yellowbrick v. Matplotlib For this little case study, I’m going to fit a Random Forest classifier to the UCI wine dataset, then use a barplot to visualize the importance of each feature for prediction. The dataset is smallish (178 rows, 13 columns), and the purpose of the classification is to predict which of three cultivars a wine contains based on various features. First, the basics: # Import basic packages import pandas as pd import numpy as np import matplotlib.pyplot as plt # In Jupyter Notebook, run this to display plots inline %matplotlib inline # Get the dataset from sklearn from sklearn.datasets import load_wine data = load_wine() # Prep features and target for use X = data.data X = pd.DataFrame(X) X.columns = [x.capitalize() for x in data.feature_names] y = data.target In the code above I grabbed the data using sklearn’s built-in load_wine() class and split it into features (X) and target (y). Note that I took the extra step of converting X to a DataFrame and giving the columns nice capitalized names. This will make my life easier when it comes time to build the plots. Let’s take a look at the Yellowbrick routine first. 
I’ll instantiate a RandomForestClassifier() and a FeatureImportances() visualizer, then fit the visualizer and display the plot. # Import model and visualizer from yellowbrick.model_selection import FeatureImportances from sklearn.ensemble import RandomForestClassifier # Instantiate model and visualizer model = RandomForestClassifier(n_estimators=10, random_state=1) visualizer = FeatureImportances(model) # Fit and display visualizer visualizer.fit(X, y) visualizer.show(); And this is what you get: In four lines of code (not counting the import statements), I’ve got a respectable-looking feature importances plot. I can see at a glance that proline is really important for identifying the cultivar, while malic acid and alkalinity of ash are not. A couple of tiny gripes: The colors don’t convey any real information, so if I were hand-coding this, I would keep all bars the same color. The x-axis has been relabeled to express the relative importance of each feature as a percentage of the importance of the most important feature. So proline, the most important feature, is at 100%, and alkalinity of ash is around 5%. I would rather see the feature importance values calculated by the Random Forest model, since even the most important feature might explain only a tiny fraction of the variance in the data. The Yellowbrick plot masks the absolute feature importances in favor of presenting the relative importance, which we could already infer from the lengths of the bars in the plot! Now let me show you what it would take to build the exact same plot by hand in Matplotlib. I’ll start by fitting the RandomForestClassifier(): # Fit a RandomForestClassifier from sklearn.ensemble import RandomForestClassifier model = RandomForestClassifier(n_estimators=10, random_state=1) model.fit(X, y) Note that I instantiated the model with the same number of estimators and random state as the one above, so the feature importance values should be exactly the same. Here’s the basic code I usually use when plotting feature importances: # Plot feature importances n_features = X.shape[1] plt.figure(figsize=(8,6)) plt.barh(range(n_features), model.feature_importances_, align='center') plt.yticks(np.arange(n_features), X.columns) plt.xlabel("relative importance") plt.title('Feature Importances of 13 Features Using RandomForestClassifier') plt.show(); And here’s the plot: Notice that all the bars are the same color by default and the x-axis represents the actual feature importance values, which I like. Unfortunately, the bars are not sorted from widest to narrowest, which I would prefer. For example, looking at the current plot, I’m having trouble telling if nonflavanoid phenols or magnesium is more important. Let’s see what it would take to reproduce the Yellowbrick plot exactly in Matplotlib. First of all, I have to sort the features by importance. This is tricky because model.feature_importances_ just returns an array of values with no labels, ordered just as the features are ordered in the DataFrame. To sort them, I need to associate the values with the feature names, sort, then split them back up to pass to Matplotlib.
# Zip and sort feature importance labels and values # (Note that reverse=False by default, but I included it for emphasis) feat_imp_data = sorted(list(zip(X.columns, model.feature_importances_)), key=lambda datum: datum[1], reverse=False) # Unzip the values and labels widths = [x[1] for x in feat_imp_data] yticks = [x[0] for x in feat_imp_data] n_features = X.shape[1] # Build the figure plt.figure(figsize=(8,6)) plt.barh(range(n_features), widths, align='center') plt.yticks(np.arange(n_features), yticks) plt.xlabel("relative importance") plt.title('Feature Importances of 13 Features Using RandomForestClassifier') plt.show(); A quick but crucial note: see how I sorted the feature importances in ascending order? That’s because Matplotlib will plot them starting from the bottom of the plot. Take it from me, because I learned the hard way: if you want to display values in descending order (top-bottom), pass them to Matplotlib in ascending order. That’s much easier to read! Now if I really wanted to duplicate the Yellowbrick plot in Matplotlib, I would also need to supply the colors and the x-tick labels, as well as remove the horizontal grid lines. # First set up colors, ticks, labels, etc. colors = ['steelblue', 'yellowgreen', 'crimson', 'mediumvioletred', 'khaki', 'skyblue'] widths = [x[1] for x in feat_imp_data] xticks = list(np.linspace(0.00, widths[-1], 6)) + [0.25] x_tick_labels = ['0', '20', '40', '60', '80', '100', ''] yticks = [x[0] for x in feat_imp_data] n_features = len(widths) # Now build the figure plt.figure(figsize=(8,6)) plt.barh(range(n_features), widths, align='center', color=colors) plt.xticks(xticks, x_tick_labels) plt.yticks(np.arange(n_features), yticks) plt.grid(b=False, axis='y') plt.xlabel("relative importance") plt.title('Feature Importances of 13 Features Using RandomForestClassifier') plt.show(); In case you’re wondering, it took me about an hour of tinkering to recreate the Yellowbrick plot in Matplotlib. This included sorting the bars from biggest to smallest, guessing the colors and struggling to get them in the right order, resetting the x-axis ticks and labels to the 100% scale, and removing the horizontal gridlines. Moral of the story: if a Yellowbrick plot will meet your needs, then it’s a much quicker way to get there than via Matplotlib. Of course, you’re never going to beat plain vanilla Matplotlib for granularity of control. More fun with Yellowbrick There is plenty more you can do to visualize your machine learning models with Yellowbrick; be sure to check out the documentation. Here are just a few more quick examples: Example 1: A color-coded confusion matrix (using the same wine data and Random Forest model as above). # Import what we need from sklearn.model_selection import train_test_split from yellowbrick.classifier import ConfusionMatrix # Split the data for validation X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1) # Instantiate model and visualizer model = RandomForestClassifier(n_estimators=10, random_state=1) matrix = ConfusionMatrix(model, classes=['class_0', 'class_1', 'class_2']) # Fit, score, and display the visualizer matrix.fit(X_train, y_train) matrix.score(X_test, y_test) matrix.show(); And here’s the code I used in a recent machine learning project to build something similar myself. Note that my function takes the true and predicted values, which I would have to calculate beforehand, while Yellowbrick gets its values from the .score() method. 
# Define a function to visualize a confusion matrix def pretty_confusion(y_true, y_pred, model_name): '''Display normalized confusion matrix with color scale. Edit the class_names variable to include appropriate classes. Keyword arguments: y_true: ground-truth labels y_pred: predicted labels model_name: name to print in the plot title Dependencies: numpy aliased as np sklearn.metrics.confusion_matrix matplotlib.pyplot aliased as plt seaborn aliased as sns ''' # Calculate the confusion matrix matrix = confusion_matrix(y_true, y_pred) matrix = matrix.astype('float') / matrix.sum(axis=1)[:, np.newaxis] # Build the plot plt.figure(figsize=(16,7)) sns.set(font_scale=1.4) sns.heatmap(matrix, annot=True, annot_kws={'size':10}, cmap=plt.cm.Greens, linewidths=0.2) # Add labels to the plot class_names = ['Spruce/Fir', 'Lodgepole Pine', 'Ponderosa Pine', 'Cottonwood/Willow', 'Aspen', 'Douglas-fir', 'Krummholz'] tick_marks = np.arange(len(class_names)) tick_marks2 = tick_marks + 0.5 plt.xticks(tick_marks, class_names, rotation=25) plt.yticks(tick_marks2, class_names, rotation=0) plt.xlabel('Predicted label') plt.ylabel('True label') plt.title('Confusion Matrix for {}'.format(model_name)) plt.tight_layout() plt.show(); # Plot the confusion matrix pretty_confusion(y_true, y_pred, 'Random Forest Model') Example 2: A t-SNE plot to show how two classes of texts overlap. I won’t go into detail about the data and model here, but you can check out the relevant project on my GitHub. # Import needed packages from sklearn.feature_extraction.text import TfidfVectorizer from yellowbrick.text import TSNEVisualizer # Prepare the data tfidf = TfidfVectorizer() X = tfidf.fit_transform(data.text) y = data.target # Plot t-SNE tsne = TSNEVisualizer() tsne.fit(X, y) tsne.show(); I don’t even know how hard this would be to do in Matplotlib because I’ve never tried. The result I got from Yellowbrick was enough to answer my question, and I was able to take that information and move on quickly. I hope you make time to experiment with Yellowbrick. I’ve had fun and learned about some new model visualization techniques while using it, and I bet you will, too. Cross-posted from jrkreiger.net.
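One more quick illustration, added here as a hedged sketch rather than part of the original post: the same fit/score/show workflow also produces ROC curves through Yellowbrick's ROCAUC visualizer. It assumes the wine data, the Random Forest model, and the train/test split from the confusion matrix example above.

# Sketch only: ROC curves for the wine classifier (assumes X_train, X_test, y_train, y_test exist)
from sklearn.ensemble import RandomForestClassifier
from yellowbrick.classifier import ROCAUC

model = RandomForestClassifier(n_estimators=10, random_state=1)
visualizer = ROCAUC(model, classes=['class_0', 'class_1', 'class_2'])

# Same pattern as the other visualizers: fit, score on held-out data, then show
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.show();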
https://towardsdatascience.com/quick-and-easy-model-evaluation-with-yellowbrick-295cb0752bce
['Jr Kreiger']
2020-02-25 20:04:55.149000+00:00
['Machine Learning', 'Python', 'Review', 'Tools', 'Data Visualization']
Darjan Hil, video collection
I have tried to collect in one article all the videos that are out there on the internet. This helps me share the videos in one place, but also keep an overview of all the talks that I or a YAAY team member have given. The internet never forgets, so let's at least keep an overview.
Meetup Talk: Labs for the European Union and public health — Date: March 2019 / Language: English
Conference Keynote: Visualisierung auf den Punkt bzw. Bubble gebracht — Date: November 2018 / Language: German
Meetup Talk: The power of information design and some portfolio examples — Date: August 2018 / Language: English
Conference Talk: EDCH Munich — every dot on the paper must have a meaning — Date: March 2018 / Language: German
Nicole Lachenmeier, Danilo Wanner: Die 7 Prinzipien von YAAY, was wir gelernt haben — Date: November 2017 / Language: German
Lecture Talk: Visualize your research — Date: December 2016 / Language: German
Imagefilm: YAAY @ Swiss Cultural Challenge — Date: November 2016 / Language: German
Hackathon Concept: Visual Exploration of Vesalius Fabrica — Date: July 2016 / Language: English
Indre Grumbinaite: The way we work at YAAY — Date: November 2015 / Language: English
YAAY Manifesto: Team answers questions visually — Date: June 2015 / Language: Visual
HGK Expert Interview: Die Geschichte hinter YAAY — Date: December 2014 / Language: German
Campus Talk: Visuell kommuniziert es sich leichter — Date: May 2014 / Language: German
Research Project: Development of a Content Independent Game Framework — Tourney — Date: August 2013 / Language: German
Research Project: Scientific Communication at the University — Blog Design — Date: June 2013 / Language: German
https://medium.com/superdot_studio/darjan-hil-video-collection-ea57678a7087
['Darjan Hil']
2020-12-02 14:59:38.014000+00:00
['Design', 'Information Design', 'Talks', 'Data Visualization', 'Infographics']
How to populate an HTML table dynamically using ngFor in Angular 9
In this article I will walk you through an example demonstrating how to create an HTML table and populate it dynamically using the native ngFor directive in Angular 9. 1 — Create an empty Angular Project First, let's start off by creating an empty Angular 9 project. To do that, open a terminal in VS Code and type the following command: > ng new LoadDataDynamically and make sure to follow the command prompts. This might take a bit to complete because Angular is going to download its required packages. Below is a link to an article that I have created that will show you how to create an empty Angular 9 project along with all the necessary requirements. 2 — Create a mock dataset that will be bound to the HTML table After the application has been successfully created, open the solution and locate the "app.component.ts" file. Inside the constructor, let's instantiate a new array of objects which will contain basic data that we will use to bind to an HTML table. data: Array<any>; constructor() { this.data = [ { firstName: 'John', lastName: 'Doe', age: '35' }, { firstName: 'Michael', lastName: 'Smith', age: '39' }, { firstName: 'Michael', lastName: 'Jordan', age: '45' }, { firstName: 'Tanya', lastName: 'Blake', age: '47' } ]; } The code inside the "app.component.ts" file should look like the screenshot below. At this point we have an object that contains information that we can use to populate our HTML table. This object is populated at the time this class is instantiated, and the constructor is doing the population for us. 3 — Create the HTML table to dynamically bind the data from the component.ts Open the "app.component.html" file and remove all of its content. By default, Angular populates the file with placeholder HTML. For now, let's get rid of everything and create our HTML table. Below is the HTML code that you need to add to the "app.component.html" file. <table> <thead> <tr> <th>First Name</th> <th>Last Name</th> <th>Age</th> </tr> </thead> <tbody> <tr *ngFor="let people of data"> <td>{{people.firstName}}</td> <td>{{people.lastName}}</td> <td>{{people.age}}</td> </tr> </tbody> </table> You will notice that inside the <tbody> I only have one <tr> with three sets of <td> tags. That is because we are not going to manually create the elements of the table; we are going to rely on the *ngFor directive to populate the table from the array for us. The *ngFor directive loops through the array of objects that we have provided and automatically binds each object's properties to the corresponding <td> based on how we have defined it in the template. In essence, *ngFor acts similarly to a for loop, and in this case we are leveraging it to loop through each element in the array and do something with the information. In this case, we create a new row with three cells for each object in the list. 4 — Build and run the application Now that we have completed the implementation of our code, let's build the application and run it in the browser to see how it behaves. To build the application, run the following command in the VS Code terminal: > ng serve -o Once the application has completed the build process it will automatically launch in a browser. You will notice that our table has loaded successfully and that all of the data in the array that we created in the "app.component.ts" file has properly been bound to the table.
https://zeroesandones.medium.com/how-to-populate-an-html-table-dynamically-using-ngfor-in-angular-9-26d4d9f2023
[]
2020-07-30 13:02:00.802000+00:00
['Angular', 'JavaScript', 'Vscode', 'Typescript', 'HTML']
Computer Vision for Garbage Detection
In recent times the use of machine learning has surged, with more computing power (GPUs) becoming available for research and training. Deep learning is now helping us venture into problem areas pertaining to major environmental and ecological impacts. One such area of concern is garbage identification and classification. When garbage is identifiable, it can be recycled efficiently, helping the environment and mitigating climate change in the long run. Robot Ramudroid aims to meet this challenge of identifying and picking up recyclable litter from roadsides, alleys, lanes, sidewalks, and other urban outdoor places using clean solar energy. Some experiments at automating garbage classification for the Ramudroid project are summarised as follows. HAAR cascades and HOG + SVM The first and simplest approach was a sliding window approach, where pixels outside of the window are cropped away and the smaller image is then sent to a classifier. The downside of this approach was that it is useful only when detecting a single object class with a fixed aspect ratio. Regions with CNN features (R-CNN) — GoogLeNet / Inception / VGG Network The concept of sliding-window bounding boxes was tossed out in favour of a model that can propose locations of bounding boxes that contain the object. AlexNet achieved 84.7% accuracy, and subsequent deeper models such as GoogLeNet, Inception, and VGG further improved the performance. These methods were nevertheless unsuitable for garbage classification, primarily due to their weight and processing time, presumably caused by the large number of categories. For example, GoogLeNet was trained on 1.2 million images for 1000-class object recognition — detecting generic objects, not specifically litter, garbage, or trash. ResNet This achieved 95% accuracy on single-class detection, but this model too did not perform very well on a group of objects. Pre-trained ResNet model: https://www.kaggle.com/keras/resnet50 You Only Look Once (YOLO) YOLO (v2 and v3) typically employs a single neural network to perform predictions of bounding boxes and class probabilities in one evaluation (looking only once), making it faster. This approach divides the image into regions and predicts bounding boxes and probabilities for each region. These bounding boxes are weighted by the predicted probabilities. It requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs the recognized objects together with their bounding boxes. We achieved decent real-time performance; however, being more suited to autonomous self-driving cars, the model was not able to detect objects very well from the camera's low ground-level viewpoint. IBM Watson Vision Cloud-based computer vision services such as IBM Watson vision perform remarkably well, generating tags like "litter" despite no prior custom tagging or training by me, as shown by the screenshot below. Google Cloud Vision However, on the flip side, the robot cannot entirely depend on payment-based cloud services due to the high volume of data. With the unsatisfactory findings from the above approaches, I started on self-trained models with a custom dataset collected from the neighbourhood, which includes litter of various types like plastic, metal cans and caps, wrappers, cardboard, paper, and glass.
Approach 1: SVM using SIFT features + Adam gradient descent optimization. The SVM used a radial basis function (Gaussian) kernel, emulating AlexNet with a multiclass SVM and a convolutional neural network. Accuracy ranged below 40%. Approach 2: Google's Teachable Machine. Teachable Machine provides a way to train models that classify objects into categories; the results included both correct detections (positives) and false positives. Approach 3: Keras + TensorFlow based on MobileNet v2 lightweight models. References: Project Ramudroid on GitHub; https://in.mathworks.com/content/dam/mathworks/ebook/gated/80879v00_Deep_Learning_ebook; https://www.kaggle.comgarbage-segrigation-on-pytorch-95-accuracy (Kaggle: Garbage Segregation); GitHub Computer Vision: https://github.com/altanai/computervision; Google Vision: https://cloud.google.com/vision and https://lens.google.com/howlensworks/; Teachable Machine: https://teachablemachine.withgoogle.com/train/image/1OIp5sHcR6cAlz5mKSdROvpFPl7qoRyz9. Originally published at http://programmingaltanai.wordpress.com on October 4, 2020.
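To make Approach 3 a little more concrete, here is a minimal transfer-learning sketch of the kind of Keras + TensorFlow MobileNet v2 setup mentioned above. It is an illustration only, not the project's actual training code: the dataset directory, class count, image size, and training settings are assumptions.

import tensorflow as tf

NUM_CLASSES = 6  # hypothetical litter classes: plastic, metal, wrappers, cardboard, paper, glass

# MobileNetV2 backbone pretrained on ImageNet, without its classification head
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base.trainable = False  # freeze the backbone so only the new head is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Assumes one sub-folder of images per litter class, e.g. litter_dataset/plastic/*.jpg
train_ds = tf.keras.preprocessing.image_dataset_from_directory('litter_dataset', image_size=(224, 224), batch_size=32)
# Rescale pixels to the [-1, 1] range that MobileNetV2 expects
train_ds = train_ds.map(lambda x, y: (tf.keras.applications.mobilenet_v2.preprocess_input(x), y))
model.fit(train_ds, epochs=5)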
https://medium.com/ramudroid/computer-vision-for-garbage-detection-136029142b3c
[]
2020-10-12 12:36:29.344000+00:00
['Cnn', 'Deep Learning', 'Computer Vision', 'Garbage Classification', 'Teachable Machine']
#ClimateFinanceNG: Bridging the Investment Gap For Nationally Determined Contributions (#NDCs) — By Dr. Jubril Adeojo
What are the inhibiting barriers and apparent disconnect between the purported available or required finance and the actual finance invested in sustainable development? I believe the supposed barriers to funding range from the cost of some green technology assets, to the perception that investment in climate action is risky, to the corporate governance structure of companies, the time period for deploying some renewables, the short-term ROI focus of investors, and the current credit-line structure of banks. All of these, and especially my last point, are to a very large extent the disconnect in funding the SDGs. Onewattsolar, an investee company of SMEFunds, has a tech solution where businesses and individuals pay only for the energy they consume. Energy users do not need to own the solar asset; they simply pay for the energy they use. Do you think a far more coordinated effort is required to encourage investments in long-term and sustainable landscape-scale initiatives? Certainly, a coordinated effort needs to be encouraged. Financial institutions and banks need to mainstream climate change into their lines of credit. We need to understand that, right now, climate risk tops the chart of all types of risk. How do you function as a bank when floods affect your investments, or how do we invest in food production when drought is spreading or floods destroy crops? The issue is that banks are not looking at climate risk as an investment strategy. Currently, we are advising AFDB on new guidelines to mainstream climate change into lines of credit released to African banks. This will support financial institutions in identifying and adopting innovative practices on sustainable financing measures to help sustainable businesses. We are facing uncharted territory that requires taking unprecedented action to recalibrate globally towards a low-carbon economy. How do we unlock private finance as a solution to achieving change? Indeed, these are scary times. We need to unlock private financing, and not just unlock it but unlock it at massive scale. A. Increase transparency: investors should readily know their exposure to carbon risk as well as their carbon impacts. Government at all levels should make it mandatory for companies to disclose their carbon risks and impacts. B. Redirect savings to investing in sustainable solutions (green bonds): to achieve a low-carbon economy fast enough, we all need to get involved. Considering the low interest rates we currently have in Nigeria, savings should be channelled into sustainable solutions that give a good return and also move us closer to a low-carbon economy. C. Government participation at all levels: all levels of government should invest in low-carbon infrastructure. For instance, local and state governments can issue green bonds in order to develop low-carbon infrastructure and accelerate the low-carbon economy. D. We shouldn't assume that people understand climate change. Continuous discussions like this one help people understand climate change and its effects; government could also introduce a labelling system that indicates carbon risk and impact. E. Heavy sanctions: governments should impose heavy sanctions on companies contributing to environmental degradation and emissions. There should also be high carbon taxes in place, and rewards for those who comply. The world is in a transition phase propelled by ensuring global average temperatures remain below 2°C above pre-industrial levels amidst a rising population. Can sector investment help to achieve a sustainable future? Yes.
Investors need to use their understanding of the climate change we now face as a pragmatic guide to analyse the manner in which they are investing. I believe using the principles of sustainability to guide investment makes business sense. It is a scientifically proven fact that if we continue to burn fossil fuels, which aggravate climate change, economic activities will be utterly disrupted. Current floods and extreme weather should serve as a warning of things to come. So investors should think twice before investing in the sector. This is a global challenge; I would say it is achievable, but the way in which financial markets and investors respond will be a great enabler in achieving the set target. There is a long-standing awareness that funding for environmental and climate efforts is scarce. However, there is a growing discourse claiming the availability of trillions of dollars to finance the global environmental agenda. What are your thoughts on this? It was Bill Gates who said that to combat and defeat climate change we have to deploy green technologies at massive scale. Innovative ways of deploying capital at massive scale are equally paramount to achieving the set target. Saying there are trillions available sitting in the bank won't get us anywhere; that's just what I think. The trillions should be deployed to achieve the stated global environmental agenda via innovative financing models. What is your recommendation for the government in bridging the investment gap for Nationally Determined Contributions (#NDCs)? A. There should be heavy sanctions and high carbon taxes for businesses that emit. B. Do more to implement mitigation measures that promote a low-carbon economy as well as sustainable and high economic growth. C. Continue to enhance national capacity to adapt to climate change. D. Invest heavily in climate-change-related science, technology, and R&D that will enable the country to participate in groundbreaking scientific and technological innovations. E. Significantly increase public consciousness and awareness of climate issues. F. Involve the private sector in addressing the challenges of climate change, with possible tax incentives. G. Strengthen national institutions to establish a highly functional framework for climate change governance. Enforcement of environmental laws is the vaccination needed now to protect our lives and businesses from the dire effects of climate change. SDG 13 (Climate Action) is the most strategic goal because the success of the other goals depends on it. This is a tweet-chat series on #ClimateWednesday — #ClimateFinanceNG
https://medium.com/climatewed/climatefinanceng-bridging-the-investment-gap-for-nationally-determined-contributions-ndcs-6221f56932f1
['Iccdi Africa']
2020-10-18 16:20:08.252000+00:00
['Investment', 'Ndc', 'Finance', 'Climate Change', 'Renewable Energy']
Understanding Important Differences Between NodeJS And AngularJS
With more and more businesses going digital, app development companies have started creating more unique and robust applications and websites for their clients. For developing interactive web applications that boost the user experience, many developers prefer the two most popular frameworks: NodeJS and AngularJS. These two JavaScript technologies are similar yet different from each other; they have varied functionalities and performance levels when it comes to creating applications, and choosing between them can be difficult. Therefore, in this blog, we will understand the ins and outs of both frameworks. What is a JavaScript Framework? When a developer sets out to create a platform for a client, they choose the JavaScript-based website or application framework best suited to achieving the desired result. A JavaScript framework is nothing but a set of pre-written JavaScript code libraries used in the routine development of various features. It helps automate repetitive work such as templating and two-way binding. JavaScript frameworks are used to design websites and make them more responsive. Some of the most popular JavaScript frameworks are AngularJS, React, VueJS, and NodeJS. What is NodeJS? NodeJS is one of the most popular open-source server frameworks, compatible with many different platforms such as Mac OS X, Windows, and Linux. It is mostly used for creating networking applications. Besides this, applications that need to anticipate a lot of scaling are developed using NodeJS. Features of NodeJS 1. Scalability NodeJS applications can be scaled both horizontally and vertically to help improve the app's performance. 2. Server Development With its built-in APIs, NodeJS developers can create many different kinds of servers, such as TCP, DNS, and HTTP servers. 3. Enhanced Performance NodeJS developers can efficiently perform non-blocking operations and enhance their web applications' performance using this framework. 4. Open-source NodeJS is entirely open-source and free to download and use. 5. Unit Testing With NodeJS, developers get a unit testing framework called Jasmine, which enables them to test code easily. Benefits of NodeJS Develop real-time web apps Higher performance Server-side coding Communicational operations Quick and Scalable NodeJS Architecture When we talk about NodeJS, it is considered one of the finest server-side platforms for building simple and scalable network apps. It uses a non-blocking I/O, event-driven model that makes real-time apps lightweight and efficient. 1. Highly Scalable Using a single-threaded model with event looping, NodeJS helps the server respond in a non-blocking manner and makes it scalable, allowing it to serve a much larger number of requests; developers can also extend older programs with new services. 2. Event-driven and Asynchronous In the NodeJS library, all the APIs are asynchronous. This means that the NodeJS server doesn't wait for an API to return data; it moves directly on to the next API, and an event notification mechanism handles the response. 3. No Buffering By using non-blocking I/O calls, NodeJS operates on a single thread and supports a massive number of concurrent connections without thread context switching. This shared-thread design follows a pattern that helps in building concurrent apps.
Popular Applications Created Using NodeJS NodeJS is used to create applications in various fields like law, computers, electronics, lifestyle, and more. There are more than 179k websites in the market that were created using NodeJS. Some of the most popular apps created using NodeJS development services are PayPal, LinkedIn, Netflix, eBay, Yahoo, and GoDaddy. What is AngularJS? AngularJS is an open-source JavaScript framework maintained by Google. It enables developers to create web applications that focus on the client side. AngularJS developers use HTML as a template language, and its syntax is used to express the application's components. Features of AngularJS 1. MVW Architecture In AngularJS, we have the MVW (Model-View-Whatever) architecture on top of the MVC framework. Here the view is manipulated, and the DOM is remodeled to update the data. 2. POJO Model The POJO model stands for the Plain Old JavaScript Objects model. It is used by developers to analyze the data flow of the development process and helps to create loops. Besides this, POJO also enables clean code for building custom apps. 3. MVC Framework The AngularJS framework comprises three parts: Model, View, and Controller. The MVC model helps developers wire this logic together automatically, with minimal extra code. 4. Easy to Use AngularJS is considered one of the easier frameworks to use. It also helps to decouple DOM manipulation from application logic. Benefits of AngularJS HTML debugging Huge developer community Quick scaffolding Perfect documentation Comprehensive solution AngularJS Architecture The AngularJS architecture follows the MVW model, which makes it capable of supporting different patterns like Model-View-ViewModel or Model-View-Controller. Let's understand the Model-View-Whatever (MVW) architecture in detail. 1. Model The model is the lowest level of responsibility, where the data is maintained and managed. This level responds to requests from the view and receives instructions from the controller for any kind of update. 2. View The view is responsible for presenting various kinds of data to the users. 3. Controller The controller mediates between the view and the model. It responds to user input and performs the corresponding interactions. Popular Applications Created Using AngularJS AngularJS enables developers to create powerful, feature-rich applications. There are around 383k web applications in the market created with AngularJS. Some of these are Upwork, Lego, Weather, IBM, DoubleClick, JetBlue, Freelancer, and more. Compare — NodeJS Vs. AngularJS Both NodeJS and AngularJS are among the most popular JavaScript technologies in the world. Each has its own benefits and helps developers in its own way to create wonderful applications. NodeJS is a well-known cross-platform runtime environment, while AngularJS is a JavaScript framework. Let us understand the important differences between these two technologies. 1. Installation With NodeJS, developers can write applications in JavaScript, but the app needs a runtime environment on macOS, Linux, or Windows, so app developers install NodeJS to create a development environment. On the other hand, there is no need to install AngularJS; developers only need to embed its files in the codebase. 2. Web Framework Another point of comparison is the web framework.
AngularJS is one of the most popular web frameworks, used to automate app development tasks and create different kinds of applications. NodeJS, by contrast, is not used as a web framework itself; instead, developers select NodeJS-based frameworks like Express.js, Sails.js, and Meteor.js. 3. Working with Data When it comes to implementing the MVC architecture pattern, AngularJS supports two-way data binding, but it does not offer any feature for writing database queries. NodeJS, on the other hand, allows developers to write queries for non-relational databases. 4. Important Features NodeJS and AngularJS both comprise excellent features that enable developers to create top-rated applications; one common feature is support for MVC architecture. When we separately list out their differences, we can get a clear idea of which framework can be used for which type of application. AngularJS allows developers to use HTML as a language for creating templates. It enables creating single-page and dynamic web apps with features like filters, templates, data binding, scope, deep linking, dependency injection, and directives. On the other hand, NodeJS is a framework that comes with an array of features for developers to create server-side and networking apps. With NodeJS, developers can simplify the development process of single-page websites, other I/O-intensive websites, and video streaming sites. 5. Programming Language & Support Both NodeJS and AngularJS support several programming languages besides JavaScript. NodeJS supports TypeScript, CoffeeScript, and Ruby, while AngularJS supports TypeScript, CoffeeScript, and Dart. Besides this, NodeJS supports functional, event-driven, pub/sub, concurrency-oriented, and object-oriented programming paradigms, while AngularJS supports event-driven, functional, and object-oriented paradigms. 6. Core Architecture NodeJS is a cross-platform runtime environment based on Google's V8 JavaScript engine, written in languages such as C, C++, and JavaScript. On the other hand, AngularJS is a product of Google, developed as a web app development framework with its own template syntax, and written in JavaScript. To Wrap Up With — AngularJS is a popular client-side framework and NodeJS is a well-known runtime environment! AngularJS and NodeJS are both open-source platforms. AngularJS is used to create single-page client-side web apps, while NodeJS is used to develop server-side apps. The two can be combined in isomorphic web apps, but they have their own architectures. One cannot say that either is better than the other, as the two frameworks focus on creating different kinds of applications. Therefore, it's all about choosing the right technology to suit your application type and hiring the best AngularJS or NodeJS development company.
https://medium.com/weekly-webtips/understanding-important-differences-between-nodejs-and-angularjs-e222d4b9fa26
['Nelly Nelson']
2020-11-25 06:03:11.588000+00:00
['Development', 'Angular', 'Node', 'Nodejs', 'Angularjs']
20 Core Data Science Concepts for Beginners
20 Core Data Science Concepts for Beginners Review these foundational concepts for a job interview preparation or to refresh your understanding of the basics Photo by Debby Hudson on Unsplash With so much to learn and so many advancements to follow in the field of data science, there are a core set of foundational concepts that remain essential. Twenty of these ideas are highlighted here that are key to review when preparing for a job interview or just to refresh your appreciation of the basics. 1. Dataset Just as the name implies, data science is a branch of science that applies the scientific method to data with the goal of studying the relationships between different features and drawing out meaningful conclusions based on these relationships. Data is, therefore, the key component in data science. A dataset is a particular instance of data that is used for analysis or model building at any given time. A dataset comes in different flavors such as numerical data, categorical data, text data, image data, voice data, and video data. A dataset could be static (not changing) or dynamic (changes with time, for example, stock prices). Moreover, a dataset could depend on space as well. For example, temperature data in the United States would differ significantly from temperature data in Africa. For beginning data science projects, the most popular type of dataset is a dataset containing numerical data that is typically stored in a comma-separated values (CSV) file format. 2. Data Wrangling Data wrangling is the process of converting data from its raw form to a tidy form ready for analysis. Data wrangling is an important step in data preprocessing and includes several processes like data importing, data cleaning, data structuring, string processing, HTML parsing, handling dates and times, handling missing data, and text mining. Figure 1: Data wrangling process. Image by Benjamin O. Tayo The process of data wrangling is a critical step for any data scientist. Very rarely is data easily accessible in a data science project for analysis. It is more likely for the data to be in a file, a database, or extracted from documents such as web pages, tweets, or PDFs. Knowing how to wrangle and clean data will enable you to derive critical insights from your data that would otherwise be hidden. An example of data wrangling using the college towns dataset can be found here: Tutorial on Data Wrangling 3. Data Visualization Data Visualization is one of the most important branches of data science. It is one of the main tools used to analyze and study relationships between different variables. Data visualization (e.g., scatter plots, line graphs, bar plots, histograms, qqplots, smooth densities, boxplots, pair plots, heat maps, etc.) can be used for descriptive analytics. Data visualization is also used in machine learning for data preprocessing and analysis, feature selection, model building, model testing, and model evaluation. When preparing a data visualization, keep in mind that data visualization is more of an Art than Science. To produce a good visualization, you need to put several pieces of code together for an excellent end result. A tutorial on data visualization is found here: Tutorial on data visualization using weather dataset Figure 2: Weather data visualization example. Image by Benjamin O. Tayo 4. Outliers An outlier is a data point that is very different from the rest of the dataset. 
Outliers are often just bad data, e.g., due to a malfunctioned sensor; contaminated experiments; or human error in recording data. Sometimes, outliers could indicate something real such as a malfunction in a system. Outliers are very common and are expected in large datasets. One common way to detect outliers in a dataset is by using a box plot. Figure 3 shows a simple regression model for a dataset containing lots of outliers. Outliers can significantly degrade the predictive power of a machine learning model. A common way to deal with outliers is to simply omit the data points. However, removing real data outliers can be too optimistic, leading to non-realistic models. Advanced methods for dealing with outliers include the RANSAC method. Figure 3: Simple regression model using a dataset with outliers. Image by Benjamin O. Tayo 5. Data Imputation Most datasets contain missing values. The easiest way to deal with missing data is simply to throw away the data point. However, the removal of samples or dropping of entire feature columns is simply not feasible because we might lose too much valuable data. In this case, we can use different interpolation techniques to estimate the missing values from the other training samples in our dataset. One of the most common interpolation techniques is mean imputation, where we simply replace the missing value with the mean value of the entire feature column. Other options for imputing missing values are median or most frequent (mode), where the latter replaces the missing values with the most frequent values. Whatever imputation method you employ in your model, you have to keep in mind that imputation is only an approximation, and hence can produce an error in the final model. If the data supplied was already preprocessed, you would have to find out how missing values were considered. What percentage of the original data was discarded? What imputation method was used to estimate missing values? 6. Data Scaling Scaling your features will help improve the quality and predictive power of your model. For example, suppose you would like to build a model to predict a target variable creditworthiness based on predictor variables such as income and credit score. Because credit scores range from 0 to 850 while annual income could range from $25,000 to $500,000, without scaling your features, the model will be biased towards the income feature. This means the weight factor associated with the income parameter will be very small, which will cause the predictive model to be predicting creditworthiness based only on the income parameter. In order to bring features to the same scale, we could decide to use either normalization or standardization of features. Most often, we assume data is normally distributed and default towards standardization, but that is not always the case. It is important that before deciding whether to use either standardization or normalization, you first take a look at how your features are statistically distributed. If the feature tends to be uniformly distributed, then we may use normalization ( MinMaxScaler). If the feature is approximately Gaussian, then we can use standardization ( StandardScaler). Again, note that whether you employ normalization or standardization, these are also approximative methods and are bound to contribute to the overall error of the model. 7. Principal Component Analysis (PCA) Large datasets with hundreds or thousands of features often lead to redundancy especially when features are correlated with each other. 
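(Looking back at sections 5 and 6 for a moment before continuing with dimensionality reduction: here is a minimal sketch of mean imputation followed by scaling, assuming scikit-learn; the feature names and values are made up purely for illustration.)

import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Hypothetical feature matrix with missing values
X = pd.DataFrame({'income': [40000, 52000, np.nan, 61000], 'credit_score': [580, np.nan, 700, 820]})

# Mean imputation: replace each missing value with its column mean
X_imputed = SimpleImputer(strategy='mean').fit_transform(X)

# Standardization (zero mean, unit variance) for roughly Gaussian features
X_standardized = StandardScaler().fit_transform(X_imputed)

# Normalization to the [0, 1] range for roughly uniform features
X_normalized = MinMaxScaler().fit_transform(X_imputed)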
Training a model on a high-dimensional dataset having too many features can sometimes lead to overfitting (the model captures both real and random effects). In addition, an overly complex model having too many features can be hard to interpret. One way to solve the problem of redundancy is via feature selection and dimensionality reduction techniques such as PCA. Principal Component Analysis (PCA) is a statistical method that is used for feature extraction. PCA is used for high-dimensional and correlated data. The basic idea of PCA is to transform the original space of features into the space of the principal component. A PCA transformation achieves the following: a) Reduce the number of features to be used in the final model by focusing only on the components accounting for the majority of the variance in the dataset. b) Removes the correlation between features. An implementation of PCA can be found at this link: PCA Using Iris Dataset 8. Linear Discriminant Analysis (LDA) PCA and LDA are two data preprocessing linear transformation techniques that are often used for dimensionality reduction to select relevant features that can be used in the final machine learning algorithm. PCA is an unsupervised algorithm that is used for feature extraction in high-dimensional and correlated data. PCA achieves dimensionality reduction by transforming features into orthogonal component axes of maximum variance in a dataset. The goal of LDA is to find the feature subspace that optimizes class separability and reduce dimensionality (see figure below). Hence, LDA is a supervised algorithm. An in-depth description of PCA and LDA can be found in this book: Python Machine Learning by Sebastian Raschka, Chapter 5. An implementation of LDA can be found at this link: LDA Using Iris Dataset 9. Data Partitioning In machine learning, the dataset is often partitioned into training and testing sets. The model is trained on the training dataset and then tested on the testing dataset. The testing dataset thus acts as the unseen dataset, which can be used to estimate a generalization error (the error expected when the model is applied to a real-world dataset after the model has been deployed). In scikit-learn, the train/test split estimator can be used to split the dataset as follows: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3) Here, X is the features matrix, and y is the target variable. In this case, the testing dataset is set to 30%. 10. Supervised Learning These are machine learning algorithms that perform learning by studying the relationship between the feature variables and the known target variable. Supervised learning has two subcategories: a) Continuous Target Variables Algorithms for predicting continuous target variables include Linear Regression, KNeighbors regression (KNR), and Support Vector Regression (SVR). A tutorial on Linear and KNeighbors Regression is found here: Tutorial on Linear and KNeighbors Regression b) Discrete Target Variables Algorithms for predicting discrete target variables include: Perceptron classifier Logistic Regression classifier Support Vector Machines (SVM) Decision tree classifier K-nearest classifier Naive Bayes classifier 11. Unsupervised Learning In unsupervised learning, we are dealing with unlabeled data or data of unknown structure. Using unsupervised learning techniques, we are able to explore the structure of our data to extract meaningful information without the guidance of a known outcome variable or reward function. 
12. Reinforcement Learning In reinforcement learning, the goal is to develop a system (agent) that improves its performance based on interactions with the environment. Since the information about the current state of the environment typically also includes a so-called reward signal, we can think of reinforcement learning as a field related to supervised learning. However, in reinforcement learning, this feedback is not the correct ground truth label or value, but a measure of how good the action was, as judged by a reward function. Through interaction with the environment, an agent can then use reinforcement learning to learn a series of actions that maximizes this reward. 13. Model Parameters and Hyperparameters In a machine learning model, there are two types of parameters: a) Model Parameters: These are the parameters in the model that must be determined using the training data set. These are the fitted parameters. For example, suppose we have a model such as house price = a + b*(age) + c*(size), which estimates the cost of a house based on its age and its size (square feet); then a, b, and c will be our model or fitted parameters. b) Hyperparameters: These are adjustable parameters that must be tuned to obtain a model with optimal performance. An example of a hyperparameter is shown here: KNeighborsClassifier(n_neighbors = 5, p = 2, metric = 'minkowski') It is important that during training, the hyperparameters be tuned to obtain the model with the best performance (with the best-fitted parameters). A tutorial on model parameters and hyperparameters is found here: Tutorial on model parameters and hyperparameters in machine learning 14. Cross-validation Cross-validation is a method of evaluating a machine learning model's performance across random samples of the dataset. This helps ensure that any biases in the dataset are captured. Cross-validation can help us obtain reliable estimates of the model's generalization error, that is, how well the model performs on unseen data. In k-fold cross-validation, the dataset is randomly partitioned into k folds. In each round, one fold is held out as the testing set, the model is trained on the remaining k-1 folds and evaluated on the held-out fold, and the process is repeated k times so that every fold serves as the testing set once. The average training and testing scores are then calculated by averaging over the k folds. Here is the k-fold cross-validation pseudocode: Figure 4. k-fold cross-validation pseudocode. Image by Benjamin O. Tayo An implementation of cross-validation is found here: Hands-on cross-validation tutorial
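Tying sections 13 and 14 together, here is a minimal sketch that tunes the n_neighbors hyperparameter of a KNeighborsClassifier with 5-fold cross-validation via GridSearchCV. The Iris data and the small search grid are my own illustrative choices, not part of the original tutorial.

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Search over a few values of the n_neighbors hyperparameter,
# scoring each candidate with 5-fold cross-validation on the training set
param_grid = {"n_neighbors": [1, 3, 5, 7, 9]}
search = GridSearchCV(KNeighborsClassifier(p=2, metric="minkowski"),
                      param_grid, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)           # best hyperparameter value found
print(search.best_score_)            # mean cross-validated accuracy
print(search.score(X_test, y_test))  # generalization estimate on the held-out data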
15. Bias-variance Tradeoff In statistics and machine learning, the bias-variance tradeoff is the property of a set of predictive models whereby models with a lower bias in parameter estimation have a higher variance of the parameter estimates across samples, and vice versa. The bias-variance dilemma or problem is the conflict in trying to simultaneously minimize these two sources of error, which prevent supervised learning algorithms from generalizing beyond their training set: The bias is an error from erroneous assumptions in the learning algorithm. High bias (an overly simple model) can cause an algorithm to miss the relevant relations between features and target outputs (underfitting). The variance is an error from sensitivity to small fluctuations in the training set. High variance (an overly complex model) can cause an algorithm to model the random noise in the training data rather than the intended outputs (overfitting). It is important to find the right balance between model simplicity and complexity. A tutorial on the bias-variance tradeoff can be found here: Tutorial on bias-variance tradeoff Figure 5. Illustration of bias-variance tradeoff. Image by Benjamin O. Tayo 16. Evaluation Metrics In machine learning (predictive analytics), there are several metrics that can be used for model evaluation. For example, a supervised learning (continuous target) model can be evaluated using metrics such as the R2 score, mean squared error (MSE), or mean absolute error (MAE). Furthermore, a supervised learning (discrete target) model, also referred to as a classification model, can be evaluated using metrics such as accuracy, precision, recall, f1 score, and the area under the ROC curve (AUC). 17. Uncertainty Quantification It is important to build machine learning models that will yield unbiased estimates of uncertainties in calculated outcomes. Due to the inherent randomness in the dataset and model, evaluation parameters such as the R2 score are random variables, and thus it is important to estimate the degree of uncertainty in the model. For an example of uncertainty quantification, see this article: Random Error Quantification in Machine Learning Figure 6. Illustration of fluctuations in R2 score. Image by Benjamin O. Tayo 18. Math Concepts a) Basic Calculus: Most machine learning models are built with a dataset having several features or predictors. Hence, familiarity with multivariable calculus is extremely important for building a machine learning model. Here are the topics you need to be familiar with: Functions of several variables; Derivatives and gradients; Step function, Sigmoid function, Logit function, ReLU (Rectified Linear Unit) function; Cost function; Plotting of functions; Minimum and maximum values of a function b) Basic Linear Algebra: Linear algebra is the most important math skill in machine learning. A data set is represented as a matrix. Linear algebra is used in data preprocessing, data transformation, dimensionality reduction, and model evaluation. Here are the topics you need to be familiar with: Vectors; Norm of a vector; Matrices; Transpose of a matrix; The inverse of a matrix; The determinant of a matrix; Trace of a matrix; Dot product; Eigenvalues; Eigenvectors c) Optimization Methods: Most machine learning algorithms perform predictive modeling by minimizing an objective function, thereby learning the weights that are later applied to new data in order to obtain the predicted labels. Here are the topics you need to be familiar with: Cost function/Objective function; Likelihood function; Error function; Gradient Descent Algorithm and its variants (e.g. Stochastic Gradient Descent Algorithm)
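As a toy illustration of these optimization ideas, here is a minimal gradient descent sketch that fits a one-variable linear model by minimizing a mean-squared-error cost function with NumPy. The synthetic data, learning rate, and iteration count are all illustrative assumptions, not something prescribed by the article.

import numpy as np

# Synthetic data: y is roughly 2*x + 1 plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=100)

# Model: y_hat = w*x + b, cost = mean squared error
w, b = 0.0, 0.0
learning_rate = 0.01

for _ in range(2000):
    y_hat = w * x + b
    error = y_hat - y
    # Gradients of the MSE cost with respect to w and b
    grad_w = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)
    # Gradient descent update step
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(w, b)  # should end up close to the true values 2 and 1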
19. Statistics and Probability Concepts Statistics and probability are used for visualization of features, data preprocessing, feature transformation, data imputation, dimensionality reduction, feature engineering, model evaluation, etc. Here are the topics you need to be familiar with: Mean, Median, Mode, Standard deviation/variance, Correlation coefficient and the covariance matrix, Probability distributions (Binomial, Poisson, Normal), p-value, Bayes' Theorem (Precision, Recall, Positive Predictive Value, Negative Predictive Value, Confusion Matrix, ROC Curve), Central Limit Theorem, R2 score, Mean Squared Error (MSE), A/B Testing, Monte Carlo Simulation Here are some educational resources on the Central Limit Theorem and Bayes' Theorem: Illustration of Central Limit Theorem Using Monte-Carlo Simulation Bayes Theorem Explained Using Heights Dataset 20. Productivity Tools A typical data analysis project may involve several parts, each including several data files and different scripts with code. Keeping all of these organized can be challenging. Productivity tools help you keep projects organized and maintain a record of your completed projects. Some essential productivity tools for practicing data scientists include Unix/Linux, git and GitHub, RStudio, and Jupyter Notebook. Find out more about productivity tools here: Productivity Tools in Machine Learning Originally published at https://www.kdnuggets.com.
https://medium.com/towards-artificial-intelligence/20-core-data-science-concepts-for-beginners-f755c96662b8
['Benjamin Obi Tayo Ph.D.']
2020-12-28 16:18:47.827000+00:00
['Machine Learning', 'Python', 'Education', 'Data Science', 'Data Visualization']
Karen’s Weekly Technology Hits Review
Karen’s Weekly Technology Hits Review Writers from Medium who might like to write for Technology Hits. Photo by Alex Knight on Unsplash I went hunting this week for some exciting new blood for Dr Mehmet Yildiz’s latest publication. I found some interesting writers with under 1k followers and whose applause was inexplicably low. Hmmm, I thought they might like some attention from the ILLUMINATION, ILLUMINATION-Curated and the Technology Hits readers and writers for their stories. The philosophy section of Technology Hits was looking a tad sparse so I looked up #technology #philosophy on Medium. The results are reviewed below in the first three stories. If any of the writers featured this week would like to write for Technology Hits here’s the invitation. The first exciting writer I found was Yogesh Malik. I giggled a few times while reading it. Especially the existential questions. What’s the meaning of life? Have you asked Siri? It must have been real fun. From time immemorial man has been asking this question and machine should be able to answer this to our satisfaction. Kevin Ann didn’t find much love for this post, dated Oct 15, 2019. One of the advantages of writing for Technology Hits is that if he joins as a writer, he can submit one previously published story per day to the new publication. Thus breathing new life and bringing more attention to his piece. All he needs to do is remove it from the current publication and submit to TH. Of course, I understand if you feel uncomfortable doing this, Kevin. Chances are though that your work will receive the attention it deserves from huge new audiences. Or might we tempt you to write something new for TH and get in at the start of this exciting new publication? Dr Yildiz is easy-going where writing for both his and other publications are concerned. Science Fiction explores many consequences of Artificial Intelligence, mind uploading, and consciousness transfer, but tends to relegate the scientific or technological mechanisms to some vague explanation or neglects it altogether. Futurists and Transhumanists eagerly await these ideas to become science fact so that that they may experience new vistas of consciousness or immortality, but they may ignore the difficulty of the groundbreaking science and technology required. She studied to be a lawyer but failed the bar and ended up working in the blockchain industry. She offers career advice. I hope Margherita Amici is still writing on Medium and will consider penning something for Technology Hits. Overnight, I was required to gain specific skills in regulatory and legal issues for the blockchain industry. I had to do a great work of comparison with jurisdiction that had somehow dealt with this new phenomenon and to become confident with technological dynamics that I’ve never faced. Orhan G. Yalçın caught my attention with artificial intelligence. I’m pretty keen to get an AI Cinderella to do all the tedious chores for me, aren’t you? Once you start consuming machine learning content such as books, articles, video courses, and blog posts, you will often see the terms like artificial intelligence, machine learning, deep learning, big data, and data science being used interchangeably. These terms represent several closely related areas within the field of artificial intelligence. Nancy Driver already writes super articles for ILLUMINATION, however, I found this one from Jul 22, 2019. Nancy, if you’d like to join Technology Hits, use this link to sign up. 
Then remove this story from ILLUMINATION and resubmit to TH. Vast new audiences are waiting to read your previously published and new technology-related work. Many people consider the timeline of home video entertainment to be VHS and then DVD, but in reality, both formats were available to buy by the late 1970s. The LaserDisc system was essentially the Blu-Ray of the VHS era. If you’d like more information about Technology Hits here’s the founder’s introduction. Thank you for reading.
https://medium.com/technology-hits/karens-weekly-technology-hits-review-325f4bd2db4c
['Karen Madej']
2020-12-21 17:01:08.545000+00:00
['Philosophy', 'Technology', 'Career Advice', 'Future Technology', 'AI']
Measuring Visual Complexity with Pixel Approximate Entropy
When designing visualizations, making charts that are straightforward to read and interpret is usually desirable. This becomes especially important in settings where users may have to make fast judgements based on the visualizations, such as in emergency rooms or on financial trading floors. In our recent paper, we introduce a new method for quantifying visual complexity, Pixel Approximate Entropy, that can be used to develop better visualizations in these types of settings. Examples of visually simple and complex charts. For an example of visual complexity, look at the chart on the right. Intuitively, it is more difficult to read than the chart on the left. To improve readability, a visualization program could enhance different important aspects of the data to make it easier to read. However, current visualization programs have no way to identify which charts are difficult to read. Pixel Approximate Entropy solves this problem by providing a "visual complexity score" that can be used to identify difficult charts. This collaborative research was between Gabriel Ryan and Eugene Wu of the WuLab, and Abby Mosca and Remco Chang at Tufts. We will present this work at the IEEE VIS 2018 conference in Berlin, Germany during the week of Oct 21, 2018 — Oct 26, 2018. Come by and say hi if you will be attending! Pixel Approximate Entropy scores for charts. What is Pixel Approximate Entropy? Pixel Approximate Entropy adapts Approximate Entropy for use as a quantitative visualization complexity measure. Approximate Entropy was originally developed for low dimensional systems such as time series [1]. Approximate Entropy works by sliding a window over the data, comparing pairs of windows point by point, and counting how many window pairs lie within a given distance threshold. The following image shows a snapshot of the comparison of two windows in the Approximate Entropy calculation. The two windows are lined up and the distance between each pair of points is calculated. The maximum distance over these pairs of points (the leftmost points in this example) is taken as the overall distance score for the two windows, and the pair of windows is added to the count of close windows if that score is under the threshold. Windowing comparison process for Approximate Entropy. Intuitively, for more random or "noisy" data, window pairs that are close at one window size are less likely to remain close as the window size increases, resulting in a higher Approximate Entropy. In Pixel Approximate Entropy, we first scale the data so that each point represents a pixel, then calculate the Approximate Entropy score of the scaled data. This ensures the resulting complexity score reflects the chart the user sees, and not the underlying data. How do we know it works? To determine whether or not Pixel Approximate Entropy is a useful visual complexity measure, we conducted a series of user studies to see how well it predicted user performance on visual tasks. Specifically, we used two visual tasks: Chart Matching: Identifying which of two charts was identical to a previously seen chart Chart Classification: Identifying what type of shape a chart was showing. Screenshots of Chart Classification task. Our studies found a clear correlation between the Pixel Approximate Entropy of the charts and user performance on every task, showing that Pixel Approximate Entropy works as a visual complexity measure. Correlation between entropy and performance on Chart Classification task. Both user accuracy and confidence decrease to minimal values as Pixel Approximate Entropy increases!
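To make the windowing idea concrete, here is a rough NumPy sketch of Approximate Entropy for a 1-D series. It is only an illustration of the calculation described above, not the authors' implementation (that is available in the pae package mentioned below); the window length m and the tolerance r are standard ApEn parameters chosen here purely for illustration.

import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate Entropy of a 1-D sequence x with window length m and tolerance r."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)  # a common default tolerance

    def phi(m):
        # All overlapping windows of length m
        windows = np.array([x[i:i + m] for i in range(n - m + 1)])
        counts = []
        for w in windows:
            # Maximum pointwise distance between this window and every other window
            dist = np.max(np.abs(windows - w), axis=1)
            counts.append(np.mean(dist <= r))  # fraction of "close" windows
        return np.mean(np.log(counts))

    # Regular signals keep matching as the window grows (low ApEn);
    # noisy signals do not (high ApEn)
    return phi(m) - phi(m + 1)

t = np.linspace(0, 8 * np.pi, 400)
print(approximate_entropy(np.sin(t)))                                   # low: smooth, regular
print(approximate_entropy(np.random.default_rng(0).normal(size=400)))   # high: noisy

Pixel Approximate Entropy would apply this same calculation to a series that has been rescaled so that one data point corresponds to one pixel of the chart.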
What are some use cases? We foresee several potential uses for Pixel Approximate Entropy, including highlighting changes in visualizations, approximate visualizations of large datasets, parameter selection for visualization, and guided chart simplification techniques. If you want to try out Pixel Approximate Entropy, it's available in both Python and JavaScript. Check out the project at https://github.com/cudbg/pae, or install the package with pip install pae or npm i pae. [1] S. M. Pincus. Approximate entropy as a measure of system complexity. Proceedings of the National Academy of Sciences, 1991. [2] "turned on flat screen monitor" by Chris Liverani on Unsplash
https://medium.com/thewulab/measuring-visual-complexity-with-pixel-approximate-entropy-996d6f5ab3b0
['Gabriel Ryan']
2018-09-06 00:41:27.623000+00:00
['Data Science', 'Data Visualization', 'Visualization']
React 2020 — P4: Functional Component Props
In the previous article, we created two functional components: Nissan and Toyota. We imported those components into the App component and rendered them. If we wanted to render other cars, we would have to generate a specific component for each of them. If you look at the two components that we created, you will see that the only difference between them is the string that's rendered and the component name. Wouldn't it be better to have a generic Car component that accepts the year, make, and model of a car as arguments and displays whatever vehicle we pass to it? Well, that's what this topic is about. Let's create a Car functional component and add Year/Make/Model as the string placeholder. If we import the Car component into App and render it, we can see it displayed in our browsers. How do we pass arguments to components? Arguments are passed in the form of props. Props is just short for properties. Props allow you to pass custom data to a React component. Props will be passed to the component when we're rendering the component. So, in the App component, we're rendering the component <Car />. We'll create custom prop names that will be attached to React's props object, which React will make accessible to each React component. Let's create a couple of custom props for our Car component. Props resemble HTML attributes to me. For example, the <img> tag has an attribute called src: <img src="">. We'll create three "attributes" for our <Car /> tag (located in src/App.js) and we'll name them year, make, and model. <Car year="1990" make="Nissan" model="240sx" /> Each one of these props, year, make, and model, will be attached to the props object by React automatically. If you look at your browser, you'll notice that nothing has changed since these properties are not being utilized anywhere. How can we display these arguments instead of the string "Year Make Model"? We'll have to modify the Car component. In order to use the props object inside of a functional component, you'll have to declare props as a parameter inside of your functional component declaration. function Car(props) { ... } Now, with React magic, you have access to the props object with all of your custom attributes. To access the props object inside of the JSX that's returned, you'll have to use curly braces. You use curly braces to access objects, variables, etc., inside of JSX. We'll cover JSX in more detail in a later article. <div>{ props }</div> To access the property that was just created, you use the dot operator on the props object. For example, to access the year property, type in props.year. We can repeat the process for make and model. If you look at your browser, you'll see that the arguments have made their way into the Car component. We can now create as many Car components as we want. If we look at our browser, we can verify that the two new vehicles are displayed. That's all there is to it. We'll look at creating class components and passing arguments to them in the next couple of articles.
https://medium.com/dev-genius/react-2020-p4-functional-component-props-c870d18175fb
['Dino Cajic']
2020-09-01 19:37:50.771000+00:00
['React', 'Reactjs', 'JavaScript', 'Programming', 'Web Development']
Top 5 Funny Swiss Expressions That Can’t Be Translated Into English
Top 5 Funny Swiss Expressions That Can't Be Translated Into English Read it and I promise you'll be "déçu en bien" The limits of my language mean the limits of my world We humans speak over 7,000 different languages. Each of them has its own peculiarities, its own quirks that provide its speakers with a unique understanding of the world. Each of them is a tool to create art and meaning in a separate, singular way. When one has the opportunity to learn a new language, one suddenly enters a parallel universe. Responding to Wittgenstein's quote above, one suddenly broadens the limits of one's world. Every language has its idioms, grammatical and syntactic constructs, and pronunciation oddities that provide it with a character like no other. Every one of them has ways to express feelings, situations, or ideas that can't be translated. I come from Switzerland, and my mother tongue is French, Swiss-French to be specific. It is the same language our neighbours to the West speak. A Swiss can understand and be understood 100% by a French person. There are tiny differences here and there, however. We have an accent French people tend to make fun of. We also use older, out-of-fashion words for which they tend to make fun of us too. Overall, French people tend to make fun of us for pretty much anything. But we Swiss don't hold it against them. We know they're just jealous. Look at the picture above. We live in Rivendell, who wouldn't want that! Our version of French is full of expressions that are not only untranslatable into English, but also incomprehensible to French speakers. Here are 5 of my favourites.
https://medium.com/curious/top-5-funny-swiss-expressions-that-cant-be-translated-into-english-65085ca9b198
['Nicolas Carteron']
2020-12-20 23:48:41.722000+00:00
['Travel', 'Society', 'Friendship', 'Culture', 'Language']
Peri-Menopause- AKA “The Abyss”
It seems to me, the lifespan of a woman can be divided into several distinct time frames: There is: The Age of Reproduction — approximately age 16 to 42 The Age of Menopause — approximately age 52 until forever ….and then there is that time period in between the two — about age 43 to 52, which can be referred to as the Age of Peri-Menopause. I lovingly refer to it as THE ABYSS. Why? Because an abyss is an endless hole — a deep and seemingly on-going chasm. An abyss also describes the “regions of hell, conceived of as a bottomless pit”. In my experience, as an ObGyn for the last 20+ years, this is exactly the way women in this age group describe what it feels like to be perched on the edge and almost falling in. When a woman is in The Age of Reproduction, it is fairly easy to understand the role of the gynecologist — it is often as simple as a choice between two opposite plans — Either “Help me have a baby”, or “Help me to NOT have a baby”……to boil it down to the simplest elements— fertility vs contraception. When a woman is in The Age of Menopause, it is also often fairly simple to come up with a plan, which will almost always include “Treat my menopausal symptoms”, and “Keep me healthy throughout my older years”. BUT — when women are in the Age of Peri-Menopause, it is almost always a complicated mix of seemingly random symptoms and processes that are in need of attention. I have heard the following: “I think I’m going crazy”. “My hormones are all out of whack”. “I cannot get out of bed”. “I have suddenly gained 15 pounds”. “I have constant fuzzy brain”. “ I’m in a bad mood all the time”. “I don’t know what’s happening to me!” THIS is the gynecologist’s complicated dilemma — — and the one where I have had to come up with a true plan of action to help women who find themselves “falling” into this hole. This truly is a time period where women are unlucky enough to have many of the well-known menopausal symptoms (hot flashes, insomnia, vaginal dryness, leaky bladder, low energy) and at the very same time (sometimes on the same day!) the shock of having periods that remind them of their teenage years — — heavy, irregular, painful, clotty and just plain annoying. It seems the pendulum swings up and back, one day causing depression, fatigue and the realization that the next stage of life has arrived, and the next day causing the return of all the hormonal rage and irritability that they thought had vanished with their youth. This truly creates the feeling of an ABYSS for many women. So what is actually happening? What can you do, if anything to prepare for this time, and to feel better as the hormonal and bodily changes start to occur? And how can you avoid feeling like you are always on the “edge of the ledge” and about to fall in? First, just a basic scientific explanation about why these changes start to occur around this time of life. Everyone has heard that when a woman reaches age 35, she is considered “elderly” from a reproductive viewpoint. We, as obstetricians, are not using this term arbitrarily, just to be mean to older mothers (although, I am sure some of us may be). We focus on age 35, because that is around the time that women naturally become less fertile. This is when they tend to ovulate less frequently, less regularly, or sometimes not at all. That makes it more difficult for some to conceive naturally, but that also has a very real effect on the body’s hormonal balance. Progesterone is the naturally-occurring hormone that is made as a result of ovulation. 
Fewer ovulations = less Progesterone. Less Progesterone = irregular periods, insomnia, bloating, and irritability. At this very same time, the ovaries continue making Estrogen, the other ovarian-produced hormone. Estrogen's job is to build a lining inside the uterus, just in case there is a fertilized egg looking for a place to implant. (Your uterus doesn't know that you're getting older!!) So the Estrogen is produced, the lining builds up, but without the Progesterone the lining doesn't know what to do. It builds up, sometimes for months at a time, and then falls off — hence the well-known floods of bleeding that tend to occur in this period of time. Sometimes the lining just falls off a little at a time — hence the pattern of spotting or bleeding irregularly throughout the month — or for what seems like months on end. There are pretty simple solutions to either of these problems (or any other bleeding patterns which seem to be a result of the production of an irregular amount of Progesterone). Replacing Progesterone, in a cyclic fashion (for 14 or 21 days of the month), or on a daily basis, or even taking it in a birth control pill — or using other forms of hormonal contraception — are all solutions geared to tell your body: "I can control these hormonal fluctuations by taking or using the very hormones that my body does not know how to make in a proper cycle." It is not "giving in" or "giving up"… it is TAKING CONTROL of a body that is — at least temporarily — a bit out of control from a hormonal point of view. Of course, it's not ALL about the hormones. As cliche as it may sound, lifestyle and general health habits will have a huge effect on the state of well-being at this time in a woman's life. If there hasn't been a strong foundation, and a commitment to take care of your body and your general health, prior to entering the peri-menopausal years, then it will absolutely be more difficult to make it through these transitional years being and feeling well. I read a quote on a sign at a physician's office recently. It said, "Those who think they have not the time for bodily exercise, will sooner or later have to find the time for illness". So true. There is really no way around it — the studies all say so. The ONE lifestyle decision you can make to live longer and healthier is to exercise. You can exercise as much as the guidelines say you should — 150 minutes a week of cardiovascular exercise plus 2 days of strength training — or as little as you allow yourself to do. But you MUST get moving! Exercise, along with a healthy diet (low in processed foods and sugar, high in protein, vegetables and whole grains — and LOW IN ALCOHOL), provides the foundation to get your body ready for the menopausal transition. If you haven't started before the hormones diminish, it will be that much harder to start a new program later on… but, really, it's never too late. Exercise to prevent weight gain, to keep your bones strong, to reduce your risk of cancer, to reduce your risk of heart disease, and to slow cognitive decline. Another extremely important component of health at any time in life, but especially at this transitional time, is stress management/stress relief. Women in mid-life, also known as the "sandwich women", are so busy taking care of aging parents, teenage children, jobs, and spouses that they are often exhausted with little time for self-care. Saying "yes" all the time is tiring, and takes a huge amount of time and cognitive energy.
Stress management involves finding a passion — something that feels uplifting, interesting, and enjoyable. Something that is just for YOU. It can be yoga, dance lessons, gardening, or socializing — even just stopping for a 15-minute cup of coffee on the way home from work, before the barrage of evening responsibilities begins. There must be something that helps lower the stress in everyday life, or the years moving forward and into menopause will prove to be less happy, less healthy, and even MORE difficult, as the stress of changing hormones becomes an added burden. I find that women are often turning to the internet or to friends to find out which supplements they should be taking as mid-life approaches. They try herbs, potions, packages of vitamins, cleanses, "detox" regimens — all in an effort to keep their bodies from aging. But most of these expensive regimens are just that — expensive and unproven, sometimes dangerous, and unlikely to provide any benefit. There are, however, some supplements that DO at least have medical evidence behind them — they are few, simple, and inexpensive. I recommend that anyone who does not get needed Calcium from their diet take it in a supplement form. Recommended doses are 600mg daily. If not eating fatty fish 2–3 times per week, a fish oil supplement is a good idea (it can be Omega-3 fatty acids, so as not to be burping up fish afterwards). Vitamin D, which is almost impossible to get from the diet, should also be supplemented, as many women are deficient and will never know it. 800 International Units daily is recommended. Other than those three, except in certain situations where there is a true deficiency or other medical condition, most other vitamins and minerals should be able to be obtained from a healthy, varied diet. Save your money! With a thoughtful combination of stress management, proper diet, exercise, supplements, and hormonal balance, the transition from reproductive life to menopausal life does NOT have to be a decade of teetering on the edge of the ledge, swinging on a pendulum, or constantly worrying that there must be something terribly, horribly wrong. With kids growing up, moving away, and retirement looming, it can be the healthy and exciting entryway to the decades where the focus will increasingly, happily, and necessarily be on you!
https://rebeccaobgyn.medium.com/peri-menopause-aka-the-abyss-f1718dec1822
['Rebecca Levy-Gantt']
2018-05-06 20:16:13.451000+00:00
['Health', 'Perimenopause', 'Midlife', 'Womens Health']
An Instant Classic: “My Octopus Teacher”
An Instant Classic: “My Octopus Teacher” This documentary will change your life I have just watched a fascinating documentary about an octopus, but it has been so much more than I expected. “My Octopus Teacher” is a film about the ocean, underwater creatures, ecology, curiosity, creativity, wildlife, sea forests, spirituality, nature, and interconnectedness. Craig Foster is a diver and wildlife filmmaker who was in a dark and stressful period in his life. He begins his therapy by diving the dark waters of the ocean in Western Cape and learns to develop a deeper appreciation of his natural surroundings. He explores how he is connected to the world around him. In particular, Craig becomes obsessed with this octopus. One day, Craig sees a strange shape on the ocean floor: An octopus is covered with an armor of rocks and shells. He decides there is a lot to learn here and he dives and visits the den of the octopus every day. Craig decides to follow the life of this octopus for over 300 days. Eventually, the octopus realizes that Craig is not a threat, but a friend. The octopus allows Craig into her life. Both Craig and the octopus are very curious and it is a symbiotic dance of mutual learning and exploration. Craig spends a lot of time with the octopus, develops this amazing empathy and gentleness, and learns a great deal about life, nature, and interconnectedness. Watch the trailer here: Craig’s underwater journey is a breathtaking one. He finds a new passion to explore life beneath the water and connects with an octopus in a pretty amazing way. It is a love story and it shows you what happens when you care about, learn from, and interact with an octopus. It is such an emotional story that you find yourself mysteriously attracted to this strange underwater creature and this sea forest. There are all these heartwarming and heartbreaking moments about the life of this octopus. You learn how she becomes a hunter at night, how she saves herself from the shark attacks through the most incredible methods, how she interacts with her habitat, how she reproduces, and how she dies. You learn about the enormous creativity and intelligence of this animal that was sharpened through millions of years of evolution. The connection you feel is so deep and creative that you might find yourself crying. This film is truly a gift for your humanity, your curiosity, and your spirit. The cinematography is fascinating — it is a feast for your eyes and soul. It reminds you of how we are all in this wonderful world together and sharing the wonders of this life and nature. It is so easy to miss and forget the magic of all. We should boldly immerse ourselves within the natural world so that we become a harmonious part of it. Each life is very fragile, precious, beautiful, and interconnected. Therefore, we must cherish and protect the life of each living being on this planet.
https://medium.com/journal-of-curiosity-imagination-and-inspiration/an-instant-classic-my-octopus-teacher-6e7555a2026a
['Fahri Karakas']
2020-09-24 19:36:38.981000+00:00
['Nature', 'Creativity', 'Life Lessons', 'Culture', 'Life']
SqR00t Offensive Security Tech Talks
Heads up, we've moved! If you'd like to continue keeping up with the latest technical content from Square, please visit us at our new home https://developer.squareup.com/blog Square's Information Security team runs a quarterly security meetup, Square R00t, which features several security- and data privacy-related lightning talks. Recently, Jason Haddix, Chris Rohlf, Jake Heath, and Michael Roberts gave infosec talks at our "Square SqR00t — Offensive Security" event in Square's San Francisco office. (Thanks to everyone who attended!) Two of these talks were recorded and are now available on our Square Engineering YouTube channel. Enjoy! Tracing User Input Through JS is for Tools — Jake Heath and Michael Roberts Jake and Michael present a tool they wrote to automate the process of finding cross-site scripting on a penetration test. Emergent Recon — Jason Haddix Jason presents his methodology for finding targets against a company that he needs to hack. If you're interested in seeing more talks like these (or giving a talk!), be sure to attend our next Square R00t events! We have two events planned for Q4 2018: Our first Square R00t in NYC on November 14, 2018. Our next San Francisco event occurs in December. We'll share more details on the SquareEng Twitter account once we get closer. Don't forget to subscribe to the YouTube channel for more great content in the future!
https://medium.com/square-corner-blog/sqr00t-offensive-security-tech-talks-1353784216aa
[]
2019-04-18 20:59:06.640000+00:00
['Security', 'Infosec', 'Engineering', 'Privacy', 'Data Security']
Republicans Rally Around Climate Change, Not Trump, in Election Year
Republicans Rally Around Climate Change, Not Trump, in Election Year The Venn diagram circles of conservatism and climate just overlapped a little further this month, ahead of early voting. Political cartoon on Donald Trump tweeting, on numerous occasions, that the U.S. needs ‘good old global warming,’ by Brian Adcock | Political Cartoon Sometime in early March, before the pandemic locked down most of America, Arnold Schwarzenegger was making his way to Ohio to meet with John Kasich. Both had a few things in common: they each once governed some of the most productive states in the nation, and they were both Republicans. But the meeting, which took place just ten days after Donald Trump referred to the coronavirus as a “new hoax” made up by Democrats, would not be to talk about how to make America great again for another four years, but rather, perhaps unexpectedly, about how to make America better on the issue of the climate crisis. A bearded Schwarzenegger, the former governor of California who found himself sharing a stage with the former governor of Ohio, energetically told a room full of attendees at Otterbein University that the environment “isn’t something to fight over. This is something to fight for — for a clean environment and a green-energy future.” Kasich, hands clasped and with a more moderated tone than his California counterpart, signaled his support by pressing to the audience that “it’s all about us doing what we can to make a difference on this issue.” Six months after that meeting, a southern Republican and formerly elected official would write an open letter to conservatives. “Dear Fellow Conservatives,” began the letter, published Tuesday by Bob Inglis, a former congressman from South Carolina, “You are the most important people in the world when it comes to solving climate change.” Indeed, in a year already shaped by the extraordinarily unexpected, the Venn diagram circles of conservative politics and climate action just overlapped a little further this month, when early voting is set to begin in some states. Inglis may have partly been buttering up his audience, but he’s not entirely off the mark. Bipartisan effort is necessary to combat climate change, believes John Kerry, a Democrat who served as secretary of state in the Obama administration and once aspired for the presidency himself. In fact, Kerry was the mastermind behind the climate meeting between Schwarzenegger and Kasich, which was presented as a townhall event under Kerry’s new organization called World War Zero. The organization, Kerry has said, is aimed at bringing together “unlikely allies” across the aisle who believe that climate change is real. That would rule out Trump, who last year pulled out of the Paris Agreement on global climate action that Kerry helped to negotiate on behalf of Americans under the Obama administration. It’s a move not lost on Kerry. “It is clear to everyone of commonsense that the only way to solve this problem, ultimately, is globally,” he said in a paneled interview last month on climate. “And yet the United States of America is doing the exact opposite — insulting leaders, praising demagogues who are on the opposite side of the issue, and worse, pulling out of the W.H.O. 
and pulling out of the Paris agreement," he added, warning that "we're at a very, very dangerous, critical moment for our nation." While Kerry was unequivocally condemning Trump, his fellow panelist, Rafael Reif, president of the Massachusetts Institute of Technology, was unequivocally condemning the divisive state of politics. "Some issues are not political, and I think climate change is one of them," he said, while recognizing that "unfortunately" the climate crisis has already been politicized. And research shows that it has only gotten worse under Trump. A nationwide poll last year found partisan polarization on the environment has grown under the current president. "Support for efforts to protect environmental quality, once viewed as a 'uniting issue' when such efforts became prominent in the 1970s, is now characterized by strong divisions along party lines," according to Gallup, which found that the once "modest variation" in the partisan gap between Democrats and Republicans has "become enormous under the Trump administration" — which is led by a man who, long before he entered office, had an equally long history of calling climate change "bullshit" and a "hoax," and of maliciously ridiculing scientists, in tweet after tweet. In his open letter, Inglis seemed to be combating some of this deep politicization of climate change among conservatives in a couple of ways. First, he reframed the issue as one that actually aligns with conservative values, writing that "we should be the first to come forward with free enterprise solutions to climate change. To do otherwise is to admit that we don't really believe in the ingenuity of free people." He then presented possible conservative solutions to the problem. "Conservatives aren't so big" on regulations, he wrote, but they can be on incentives, and they definitely are on accountability. "We need to get climate action right before big government gets it wrong," Inglis urged his readers, before trying one last time to depoliticize the issue: "The good news is the Left tends to support market-oriented solutions." Although a conservative from the South, Inglis is not new to climate action. In 2012, he launched RepublicEn, an organization that describes itself as "a community of advancing free-enterprise solutions to climate change." In July, a spokesperson for the organization, Jacob Abel, wrote a letter to the editors at the Charlotte Observer, applauding both Republican senators of North Carolina, Thom Tillis and Richard Burr, as well as other senators in the same party, for sending "a letter to Senate Majority Leader Mitch McConnell urging him to consider investment to bolster the clean energy economy and innovation." About a month later, Inglis would publish his open letter to conservatives on Kerry's new climate organization's website. Today, World War Zero counts a range of Republicans as partners, including Meg Whitman, who once ran Hewlett Packard Enterprise and is now the chief executive at Quibi, John McKernan, a former governor of Maine, Hank Paulson, the former treasury secretary, and Cindy McCain, widow of the one-time Republican presidential nominee John McCain. (Whitman, McCain and Kasich all appeared at the Democratic National Convention last month, decidedly not in support of their party's nominee and sitting president.) But it took some time for Kerry, who has long been working on climate issues, to reach this new stage — that is, to figure out the right way to get people to understand the gravity and immediacy of his work.
"I will assume some blame," he admitted in the interview, "for not having thought carefully enough in the very beginning about that message." In those earlier days, he said, "there was a lot of talk about ice, and ice melting," and "polar bears moving." But what it lacked was "relating it somehow to people's lives." Which is why, perhaps, the most telling thing about RepublicEn's letter to the Observer's editors is not the action taken by Republicans, but rather the author behind it. Abel, the spokesperson, as it happens, is an undergraduate student and, notably, "a young Republican from Concord who cares deeply about climate change, as well as U.S. energy independence." Indeed, research shows that millennials of all walks of life are worried about the environment. A separate Gallup study last year found that among this age group, which is worth $1 trillion in consumer spending, seventy-three percent say they would spend more on sustainable products. In fact, the study further noted, "millennials' concerns about global warming is at a high point." This potentially means a renewed opportunity for Kerry, Inglis, and others working on climate mitigation and adaptation efforts to get the messaging right this time. "Bottom line is," said Kerry, "we have to open up that new conversation, and it centers around jobs and the economy." Schwarzenegger succinctly put it another way. "We all drink the same water."
https://medium.com/climate-conscious/republicans-rally-around-climate-change-not-trump-in-election-year-b7aaf1838758
['David Montalvo']
2020-09-04 14:01:03.862000+00:00
['Climate Change', 'Environment', 'Donald Trump', 'Culture', 'Politics']
Dynamically Adding Lines to a Plotly Plot in R
Since Plotly made their interactive graphing platform available for R, I've been trying to incorporate it into more of my projects. Recently I was working on a graph that included the price of a particular stock as well as multiple moving averages, and thought it could be useful to use vertical lines to help better define buy and sell points for the viewer. While ggplot2 allows users to pass a single vector of values to geom_vline() to do this, Plotly appears to require a separate list of arguments for each line (possibly as a result of the complete control they give you over the positioning of shapes). Below is a snippet of the data I used to build the graph. Full data available here. A snapshot of the full GE price dataset Below is an example of the arguments necessary to build a vertical line from the x axis to the adjusted price for GE on 2-11-1997. p <- layout(p, shapes = list(type = "line", fillcolor = "blue", line = list(color = "blue"), opacity = 0.3, x0 = "1997-02-11", x1 = "1997-02-11", xref = "x", y0 = 0, y1 = 10.22, yref = "y")) Obviously drawing more than a couple lines in this manner would be very clumsy. My goal was to point out crossover points between the 2 EMAs, which means a non-programmatic approach would be very impractical. To make this task easier, I created a simple "for" loop to identify crossovers indicating a "buy" (in green) or a "sell" (in red). First I used dplyr to filter out all the days where the EMAs did not cross: line_dat <- dat %>% filter(dat$signal != '--') Then I created a list of lists, with each of the nested lists containing the arguments for one line. To make it easier for the end user to distinguish between "buy" and "sell" days, I used an if-else statement to alter the line color for each iteration depending on the signal. line_list <- list() for(i in 1:nrow(line_dat)){ line_color <- ifelse(line_dat$signal[i] == 'buy', 'green', 'red') line_list[[i]] <- list(type = "line", fillcolor = line_color, line = list(color = line_color), opacity = 0.3, x0 = line_dat[[1]][i], x1 = line_dat[[1]][i], xref = "x", y0 = 0, y1 = line_dat[[2]][i], yref = "y") } Below is the resulting graph. I've already found other uses for Plotly at work, and I'm excited to see how I can work it into my R workflow.
https://medium.com/zappos-engineering/dynamically-adding-lines-to-a-potly-plot-in-r-546ace6e626b
['Raphael Fix']
2016-02-16 20:14:50.507000+00:00
['Plotly', 'R', 'Data Visualization']
What’s an API? Explained with a 2 Min Example
What's an API? Explained with a 2 Min Example That anyone can try out (in 2 minutes) The author really tried :,) I very recently got asked the question "What is an API" by a friend of mine who's from a business background, and when I explained it, I realized the only point in time when I ACTUALLY had a good understanding of what it is was when I made an API call. That's why in this article, I'm gonna try explaining what an API (Application Programming Interface) is with an easy-to-follow example. But before that let me just copy paste the textbook definition of what an API is for the sake of completeness :) According to Wikipedia, An application programming interface is a computing interface that defines interactions between multiple software intermediaries. It defines the kinds of calls or requests that can be made, how to make them, the data formats that should be used, the conventions to follow, etc. There's also this very common term buzzing around — REST API. REST stands for representational state transfer and is often contrasted with SOAP (a protocol that's more complex). REST is a lightweight set of architectural principles, and APIs designed for REST (RESTful APIs) return data in these formats — HTML, XML, plain text, and JSON. JSON is most common and preferred since it's easily readable by computers and also humans. We'll be focusing on REST APIs. Okies and with that, let's get started! What we're gonna need for this guide: Some online API data set (we're gonna be using this link which returns you the design applications lodged with the IP office of Singapore). I'm using this because I find this site really beginner friendly and hassle-free. Something to make an API request. I'm using Postman since I already have it installed (and its interface is really easy to use too). But you can use an online API request tool or other applications like Insomnia. Alternatives for 1 and 2 will be listed at the end of this article. So if you click on the following link — https://data.gov.sg/dataset/ipos-apis, on page load you'd see: Fig 1 — Screenshot taken by Author from Data.gov.sg The fun thing with this site is that you can click on the "Try it out" button and try making the GET request right on this site. So after clicking that, Enter a date — Screenshot taken by Author from Data.gov.sg We get status 200 and the data returned in JSON format — Screenshot from Data.gov.sg That's it. We just made the GET API request. What if the site we're on doesn't have such an interface? Then we'll have to use an API requester. (I'm using Postman in this case coz the logo is really cute + user-friendly interface) On Postman, either create a new request or untitled request: Untitled request We then paste in the request URL and the query parameters as instructed in Fig 1. Paste in the query parameters and the request URL We then get a response and the JSON response is nicely presented to us: Returns a 200 OK (means it's successful) It may be worth mentioning that you TECHNICALLY can also just throw the link (https://api.data.gov.sg/v1/technology/ipos/designs?lodgement_date=2019-10-15) in your browser and you'll get the same response: The problem is that it'll look really messy and usually we'd make API calls in our applications or smth and we'll process the JSON object/response we get back and do something with it. Other possible returns — Taken from AWS — link We got a 200 OK and that means that our API request got successfully handled and returned.
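If you'd rather script the same GET request than click through Postman, here is a minimal Python sketch using the requests library against the same data.gov.sg endpoint. The endpoint URL and the lodgement_date parameter come straight from the example above; everything else is just standard requests usage and is meant as an illustration, not the only way to do it.

import requests

# Same endpoint and query parameter as in the Postman example above
url = "https://api.data.gov.sg/v1/technology/ipos/designs"
params = {"lodgement_date": "2019-10-15"}

response = requests.get(url, params=params)

print(response.status_code)  # 200 means the request was handled successfully
data = response.json()       # parse the JSON body into a Python dict
print(data)

# For a protected API you would also send credentials, typically via headers,
# which is the authorization idea discussed below.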
Depending on what request you make, you can get other types of responses listed in the table above. There are other HTTP methods like POST, PUT, DELETE and PATCH but I find myself using POST, GET and DELETE most often. If you’d like to read more about these check out this link. I think one of the things that I found confusing at the beginning was the whole “Params”, “Authorization”, “Headers” and stuff. We’ve demonstrated how Params are used in our example above so I’d briefly mention one common use case of headers — for authorization. You saw how easily we just accessed that online data set through a simple API request above. What if we only want specific people to access it or make it more private? The answer is to use Authorization Headers. So the client making the request will have to provide the correct credentials when making the request. The server will check if it is correct or wrong. ONLY if the credentials are correct then a 200 OK will be returned with the requested response body. That’s it for this article! I really hope it was informative :)
https://medium.com/javascript-in-plain-english/whats-an-api-explained-with-a-2-minute-example-2c18b32b1103
['Ran', 'Reine']
2020-11-21 09:44:55.620000+00:00
['API', 'Software Engineering', 'Software', 'Programming', 'Software Development']
I Don’t Believe In America Anymore
I’m not so naive that I can’t see America’s been broken for a long time. If anything, the dumpster fire that is 2020 has only cemented the reality of the America that never really existed. I think a lot about that Langston Hughes poem, “Let America Be America Again.” Let America be America again. Let it be the dream it used to be. Let it be the pioneer on the plain Seeking a home where he himself is free. (America never was America to me.) Let America be the dream the dreamers dreamed— Let it be that great strong land of love Where never kings connive nor tyrants scheme That any man be crushed by one above. (It never was America to me.) O, let my land be a land where Liberty Is crowned with no false patriotic wreath, But opportunity is real, and life is free, Equality is in the air we breathe. (There’s never been equality for me, Nor freedom in this "homeland of the free.") Say, who are you that mumbles in the dark? And who are you that draws your veil across the stars? I am the poor white, fooled and pushed apart, I am the Negro bearing slavery’s scars. I am the red man driven from the land, I am the immigrant clutching the hope I seek— And finding only the same old stupid plan Of dog eat dog, of mighty crush the weak. I am the young man, full of strength and hope, Tangled in that ancient endless chain Of profit, power, gain, of grab the land! Of grab the gold! Of grab the ways of satisfying need! Of work the men! Of take the pay! Of owning everything for one’s own greed! I am the farmer, bondsman to the soil. I am the worker sold to the machine. I am the Negro, servant to you all. I am the people, humble, hungry, mean— Hungry yet today despite the dream. Beaten yet today—O, Pioneers! I am the man who never got ahead, The poorest worker bartered through the years. Yet I’m the one who dreamt our basic dream In the Old World while still a serf of kings, Who dreamt a dream so strong, so brave, so true, That even yet its mighty daring sings In every brick and stone, in every furrow turned That’s made America the land it has become. O, I’m the man who sailed those early seas In search of what I meant to be my home— For I’m the one who left dark Ireland’s shore, And Poland’s plain, and England’s grassy lea, And torn from Black Africa’s strand I came To build a "homeland of the free." The free? Who said the free? Not me? Surely not me? The millions on relief today? The millions shot down when we strike? The millions who have nothing for our pay? For all the dreams we’ve dreamed And all the songs we’ve sung And all the hopes we’ve held And all the flags we’ve hung, The millions who have nothing for our pay— Except the dream that’s almost dead today. O, let America be America again— The land that never has been yet— And yet must be—the land where every man is free. The land that’s mine—the poor man’s, Indian’s, Negro’s, ME— Who made America, Whose sweat and blood, whose faith and pain, Whose hand at the foundry, whose plow in the rain, Must bring back our mighty dream again. Sure, call me any ugly name you choose— The steel of freedom does not stain. From those who live like leeches on the people’s lives, We must take back our land again, America! O, yes, I say it plain, America never was America to me, And yet I swear this oath— America will be! Out of the rack and ruin of our gangster death, The rape and rot of graft, and stealth, and lies, We, the people, must redeem The land, the mines, the plants, the rivers. 
The mountains and the endless plain— All, all the stretch of these great green states— And make America again! — Langston Hughes, 1935
https://medium.com/honestly-yours/i-dont-believe-in-america-anymore-b8c6ec27dd8e
['Shannon Ashley']
2020-12-23 00:22:03.608000+00:00
['Politics', 'Money', 'Society', 'Life', 'America']
Vaginal Hysterectomy; What You Need to Know and How to Prepare
Vaginal Hysterectomy; What You Need to Know and How to Prepare An Obgyn explains this surgical procedure Our Preparing for series allows a patient to prepare themselves for a procedure properly. We answer questions about how long the procedure will last, what’s involved, what to expect, and even advice on packing your bag. While your surgeon preps, we’ll make sure you’re ready. What is a hysterectomy? A hysterectomy is a surgery to remove the uterus. Gynecologists perform hysterectomies for a variety of gynecologic conditions such as uterine fibroids, heavy periods, endometriosis, chronic pelvic pain, uterine prolapse, and gynecologic cancer. During a hysterectomy, a surgeon removes the uterus. Gynecologists often recommend removing the fallopian tubes (bilateral salpingectomy) to reduce the risk of ovarian cancer. Some women will also need the removal of the ovaries (oophorectomy). Removal of the ovaries triggers hormonal changes. After a hysterectomy, a woman can longer get pregnant. Gynecologists perform hysterectomies through a variety of techniques. The uterus’ size, the patient’s body type, and prior surgical history help determine the surgical approach. Techniques include: Vaginal hysterectomy Abdominal hysterectomy Laparoscopic hysterectomy Laparoscopic-assisted vaginal hysterectomy Robotic hysterectomy What are the advantages of vaginal hysterectomy? Vaginal hysterectomies are performed through a small incision at the back of the vagina. The uterus is slowly detached from the pelvis and then removed through the vagina. There is only a single incision inside the vagina; there are no abdominal incisions. Vaginal hysterectomy is a minimally invasive surgery that benefits patients by having only a vaginal incision, shorter hospital stay, faster recovery, reduced pain, and a shorter hospital stay. The American College of Obgyn states that a vaginal hysterectomy is the preferred minimally invasive approach because it is associated with better outcomes. However, some patients may not be candidates because of uterine size or prior surgical history. Your doctor will determine which approach is most suitable for you. Is hysterectomy safe? Hysterectomy is a very safe surgical procedure, and complications are rare. However, as with any surgery, problems can occur, such as: Fever and infection Heavy bleeding during or after surgery Injury to the urinary tract or nearby organs Blood clots in the leg that can travel to the lungs Breathing or heart problems related to anesthesia Death Some problems are seen immediately, and some may not show until days, weeks, or even years after surgery. These problems include the formation of a blood clot, infection, or bowel blockage. Complications are generally more common after an abdominal hysterectomy and in women with certain underlying medical conditions. How long will I be in the hospital? Surgeons perform vaginal hysterectomies as an outpatient procedure (meaning the patient can go home the same day) or inpatient surgery with an overnight stay. Various factors, such as the patient’s underlying health status, surgical complexity, and physician preference, help determine the surgical plan. Most vaginal hysterectomy patients can leave the hospital sooner than after an abdominal hysterectomy. Can my family visit me? A trusted family member should drive you to and from the hospital or ambulatory surgery center. Families are welcome to stay with you before and after surgery. 
Hospital visitor policies for overnight stays vary with the ongoing COVID-19 pandemic. Does my procedure require an anesthetic? A vaginal hysterectomy requires general anesthesia, meaning patients will temporarily be put to sleep. The surgeon may also inject a local anesthetic into the incisions to decrease postoperative pain. Why do I need a preoperative clinic visit? Most surgeries will involve a preoperative visit with your surgeon to review the procedure’s risks and benefits and to answer your questions regarding the upcoming surgery. Because hysterectomies will eliminate the possibility of child-bearing, your doctor will confirm that you do not want children in the future. It is essential to provide your doctor with an updated list of all medications, vitamins, and dietary supplements before surgery. This will help us carefully review your medications and plan when to stop certain medicines, when the last dose should be taken prior to the surgery, and when to resume medications. This is particularly important for patients taking aspirin, blood pressure medicines, and diabetes medicines. Your doctor should review all medication and food allergies. We remind patients to avoid alcohol 24 hours before the surgery. If any blood work or preoperative testing is required, it will be scheduled and confirmed. If appropriate, share any lab work, radiologic procedures, or other medical tests done by other healthcare providers with your surgeon before your surgery. Some patients may need to supply a surgical clearance letter from their primary care physician. Finally, the doctor will give instructions regarding your diet before the surgery. Try to avoid wearing jewelry, make-up, and nail polish or acrylic nails on the day of surgery. If you wear contacts, glasses, or dentures, please bring a case. You should also confirm the date, time, and location of the surgery. What happens after I check in at the hospital? After arrival at the hospital or ambulatory surgery center, the staff will guide you to the preoperative holding area to change into a surgical gown and store your belongings. You will meet the nursing team who will provide care during your stay. They will review your medical history. The surgical consent form is also reviewed, signed, or updated with any changes. An IV will be placed at this time. You may be given special stockings to help prevent a blood clot. The anesthesia team will come to interview you and answer questions. Typically, your surgeon will also review any last-minute questions. What happens in the operating room? After the preoperative evaluation, the team will guide you to the operating room. You will move from the mobile bed to the operating table. Monitors will be attached to various parts of your body to measure your pulse, oxygen level, and blood pressure. Then the anesthesiologist will give medication through your IV to help you go to sleep. The OR nursing team will cover your body with sterile drapes and apply an antibacterial fluid to your abdomen and vagina. After you are asleep, a tube called a catheter may be placed in your bladder to drain urine. The team then performs a “surgical time-out.” A surgical safety checklist is read aloud, requiring all surgical team members to be present and attentive. The gynecologist will insert a speculum into the vagina to allow visualization of the cervix, the opening of your uterus located at the back of the vagina.
Once the speculum is in place and the cervix is visualized, the surgeon will grasp the cervix with an instrument called a tenaculum. This step helps us safely operate and avoid injury to surrounding tissue such as the bladder, rectum, intestines, and ureters. Then we work to detach your bladder from the uterus. After the bladder is safely out of the way, we begin to gradually detach the uterus from the pelvis. The surgeon will first focus attention on the uterine arteries. These two blood vessels are the main blood supply to the uterus and travel over the ureters, which connect the kidneys to the bladder. Once the uterine arteries are controlled, the surgeon then gradually and safely separates the uterus from the body. If indicated, the tubes and ovaries are also removed. The uterus is delivered through the vagina and sent to the pathology lab for microscopic analysis. The surgeon examines all of the surgical sites for bleeding. The surgeon then sews the edges of the vagina closed to form the vaginal cuff. Once the procedure is complete, the surgical team completes a post-procedure review. All instruments and equipment are counted and verified. When finished, the anesthesiologist will begin to wake up the patient and then transfer her to the recovery room. What happens in the recovery room? Once the operation is over, you will be moved into the recovery area. This area is equipped to monitor patients after surgery. Many patients feel groggy, confused, and chilly when they wake up after an operation. You may have muscle aches or a sore throat shortly after surgery. These problems should not last long. You can ask for medicine to relieve them. You will remain in the recovery room until you are stable. As soon as possible, your nurses will have you move around as much as you can. You may be encouraged to get out of bed and walk around more quickly after your operation. Walking helps reduce the risk of blood clots. You may feel tired and weak at first. The sooner you resume activity, the sooner your body’s functions can get back to normal. What preparations should I make for aftercare at home? You should speak with your physician regarding the resumption of exercise and sexual activity. Sexual activity is typically restricted for 6–8 weeks to allow the vagina to heal. Do not insert anything into your vagina — no sex, tampons, or douching — until cleared by your doctor. Most women can return to basic activities in one to two weeks. Generally, we recommend patients stick to light activity only for the first 4–6 weeks. Light exercise helps your body heal and prevents some postoperative complications. Be sure to get plenty of rest, but you also need to move around as often as you can. Take short walks and gradually increase the distance you walk every day. Avoid strenuous exercise and heavy lifting. You may resume a regular diet on the day of surgery. It may help to prepare some meals and do your grocery shopping and laundry before surgery. You will be given instructions to help control postoperative pain during healing. Some pain is expected for the first few weeks after the surgery. You may also have light bleeding and vaginal discharge for a few weeks. Sanitary pads can be used after the surgery. Constipation is common after hysterectomies. Try a stool softener and a fiber supplement. Some women have temporary problems with emptying the bladder after a hysterectomy. Some women have an emotional response to hysterectomy.
You may feel depressed that you are no longer able to carry a pregnancy, or you may be relieved that your former symptoms are gone. Your doctor will schedule a postoperative examination 4–6 weeks after the procedure. After recovery, we recommend continuing your routine annual gynecologic exams. Depending on your age and reason for the hysterectomy, you may still need pelvic exams and Pap tests. DANGER SIGNALS Call your doctor or report to the ER if you experience: Pain not controlled with prescribed medication Fever > 101°F Severe nausea and vomiting Calf or leg pain Shortness of breath Heavy vaginal bleeding Foul-smelling vaginal discharge Abdominal pain not controlled by pain medication Inability to pass gas This article was contributed by MacArthur Medical Center’s Dr. Reshma Patel
https://medium.com/beingwell/vaginal-hysterectomy-what-you-need-to-know-and-how-to-prepare-488f0c152781
['Macarthur Medical Center']
2020-10-27 21:17:32.827000+00:00
['Health', 'Surgery', 'Womens Health', 'Women', 'Hysterectomy']
What’s The Secret To Achieving Your Must-Have Outcomes?
Answer for Me Early in my career, I was amazed by how some individuals and organizations were successful while others were not. Some just stood out in the crowd. So, I began to ask them why they were so successful. Were they smarter, or did they work harder? Were they luckier, or did they have it given to them? Nope. Nearly every one of them had learned the system for achieving outstanding outcomes day in and day out. They create OKRs (Outcomes and Key Results) for their lives and businesses. I discovered that being smarter, working harder, being luckier, and having others give it to you does help. But it was not the cause of their success. Being clear about what they wanted — their outcomes — mattered much more. Once they were clear about their outcomes, they determined how to measure them with key results. And finally, they kept score and measured these essential areas. Now, all that they did could be focused on their key outcomes and results. Amazing things happened then. I then began to apply this approach to my own life and career. Each day I remind myself what my key goals or outcomes are. I then determine how I will know when I have achieved them. I measure my activities and results on a daily and weekly basis. Over time, with work and focus, I achieve them. I have had a fair amount of success in being a husband, father, friend, son, business leader, and community leader. I am not smarter, and I don’t work harder than most others. I am just more focused on my OKRs.
https://medium.com/illumination/whats-the-secret-to-achieving-your-must-have-outcomes-b6b3cbc6815f
['Randy Wolken']
2020-12-27 16:49:42.085000+00:00
['Leadership', 'Business', 'Self Improvement', 'Self-awareness', 'Life Lessons']
The Art Of Being Sick
My Prayer Shawl — A Gift From My Sister — Made With Love To Help Me Heal When you have a chronic illness, you can go one of two ways with it. Either you embrace being sick and make it part of who you are, or you fight it — nearly going into denial regarding your health status. Neither place is especially good for you. As in all things, balance is the best policy, but humans tend to live in extremes. With my asthma, I am a fighter, a denier of the highest order. I didn’t even go to a doctor for it until I was well into my twenties. It’s a miracle I survived on over-the-counter medication that long. But that’s what you did in my family — you sucked it up and you went on with your life. My father freely admitted he didn’t ‘believe’ in doctors. I once told him, “Dad, it’s not like they’re Santa Claus — they really do exist.” He did not find my correlation the least bit funny. Even as a kid, I was a smart ass. But I digress. Finally, after seeing a general practitioner for my asthma and being placed on the current drug regimen of choice for treatment, it still took another few years before I managed to find my way to an allergist to be treated for the root cause of the asthma. I was in my fifties before I actually landed in the office of a pulmonary doctor. Because I just carried on. I managed. I denied. I did not let my ‘condition’ slow me down or stop me or keep me out of work or from doing the things. All the things. Until it finally did. Completely. For weeks at a time. Because my body said ‘enough of this shit — you will listen to me.’ And I found my strength of will would only push me so far. I had to give my body the time and care she needed to recover. I had denied her needs and her healing for decades. I had taken care of everyone around me, but her. And don’t think I ever let anyone else take care of her. Oh no — let me amend that — oh hell no! I was the mother, the nurse, THE caregiver. I did not accept care. It has only been very recently that I have been able to amend that and allow the people who love me to care for me. It’s an incredibly difficult thing on so many levels. It means you have to look your disease square in the face and admit you aren’t strong enough to do this alone anymore. There might have been a day — but those days are gone now. You and your body need some help with this, so yes, thank you all for being here for us. Because the warrior within is tired, defeated. We are raising the white flag. It means you have to admit to everyone who loves you that you need them. More importantly — that you need them more than they need you now. When you’ve been the caregiver forever and a day — the shift of roles is something you feel right down to the foundation of who you are. Who are you really if you can’t take care of other people? If you can’t even take care of yourself? Your sense of independence evaporates when you find yourself unable to open a bottle of ginger ale without sending out an S.O.S. It means allowing people to see you at your most vulnerable. Sick, short of breath, feverish, coughing, sucking on breathing treatments, so tired there is no moving — there is only being and sleeping. You have to learn to trust them with everything. With all the scary things. All the decisions you’ll be too weak to make about your life. And part of you knows at that point — you won’t even care, because you’ve been to that brink a couple of times already. But today — today you are getting well.
You have given your body what it needs and she is rewarding your patience and love with health. You are getting better at taking care of her. You are getting better at telling the real world to fuck off and give her the space and rest she needs to heal. And you are doing it without guilt. You have let people who love you come to your aid, you have felt their love wrap around you like a warm shawl and that has added to the healing. You have learned.
https://annlitts.medium.com/the-art-of-being-sick-ccecf2a1e469
['Ann Litts']
2018-07-28 16:29:48.527000+00:00
['Life Lessons', 'Women', 'Nursing', 'Health', 'Life']
How to talk to children about climate change, without terrifying them
How to talk to children about climate change, without terrifying them Children are crucial to tackling climate change, but they’re also scared about their future. And fear leads to inaction. Children are crucial to tackling our climate crisis. Teenagers, like Greta Thunberg, are now leading the way when it comes to climate action, educating world leaders on what needs to be done, and when (i.e. now!) The school strikes for climate taking place every Friday across the world demonstrate their passion, and their anger at the inaction of adults thus far. When it comes to younger children, though, the danger is that learning about the reality of climate change, and the devastating impacts it will have on our planet and on our people, could just terrify them. I recently attended an event with the Low Carbon Hub, where members shared their experiences and current community projects. Mim Saxl, representing the community group Low Carbon West Oxford, told us how her child came home from school one day incredibly concerned, having learnt that ‘my children are going to die in 2050’. We don’t want to end up scaremongering our children, or providing them with misinformation. We want to give them accurate information and empower them to do something about it, in their own way. So what’s the best way to talk to children about climate change? Mim’s solution was to set up Kids CAN (Climate Action Network), providing free ‘fact-based and empowering’ resources which can be used by teachers and parents when talking to their children about climate change. This includes their ‘fact buster booklet’ which explains the basics of climate science and greenhouse gas emissions in an accessible way (with lots of diagrams!) as well as actions that children can take to stop this happening. There is also a lesson plan and PowerPoint presentation to accompany the booklet. Resources like this allow discussions of climate change to be integrated into the school curriculum in an accessible, seamless way, laying a foundation of knowledge without overwhelming children with every aspect of the science. An education in nature Photo by Annie Spratt on Unsplash It’s also important to allow children to connect with nature, and become curious about the planet in which they live, and the plants and animals they share it with. It helps them to understand why taking care of the planet is important, so that when they learn about the risks of global warming they will naturally care about preventing it. This might be through watching nature documentaries, visiting a wildlife centre or natural history museum, taking walks in nature, introducing them to different habitats and environments such as beaches, woodland, mountains, and more. Empowering action Photo by Markus Spiske on Unsplash As we’ve said, the danger is that children become fearful and disempowered when learning about climate change. Instead, we want to empower them. One way to do this is through sharing stories with them of youth activists who are making a difference — and there’s certainly no shortage of them at the moment! Knowing that ordinary people, just like us, are taking action and driving change can inspire us to think about our own role in the future of our planet, and to act ourselves. Similarly, we can focus on talking to children about what actions we can take as individuals against climate change. Even the smallest things can start to make us feel empowered and want to do more.
NASA have developed resources on ‘how to help’ as part of their Climate Kids project, which is a great place to start.
https://medium.com/age-of-awareness/how-to-talk-to-children-about-climate-change-without-terrifying-them-a4595738e03
['Tabitha Whiting']
2020-01-22 08:51:01.267000+00:00
['Climate Change', 'Environment', 'Education', 'Children', 'Teaching']
Trading your coffee cup
Trading coffee is not an easy job. On a recent trip to Vietnam, I had the chance to meet a coffee trader and visit his wet mill. First of all, let’s remember that there are two coffee species traded around the world: coffee arabica, which accounts for about 60% of traded coffee, and coffee robusta, which makes up the other 40%. Coffee arabica is mostly used for drinks, while robusta, even though it can be drunk as a low-quality coffee, is more often transformed into medicines and food products. Coffee arabica futures are traded on the New York stock exchange, while robusta futures are traded on the London stock exchange. These platforms are used to exchange coffee futures, which are contracts used by traders to limit their exposure to coffee supply changes and price volatility. Every time my trader friend buys real coffee from farmers, he sells a coffee future; and every time he sells coffee, he buys a coffee future. Traders make money from the price difference between the value of coffee futures and the price of real coffee. Notice all the samples of coffee beans stored in plastic boxes? My friend’s biggest difficulty is that he must always check coffee prices on his phone until 1:30 in the morning, Vietnam time, when the London Stock Exchange closes. Vietnam being the largest exporter of robusta coffee, most of my friend’s transactions happen in London. Once the London Stock Exchange closes, my friend checks the closing price of coffee, writes it down, and sends it to all coffee merchandisers in the country who buy coffee directly from farmers.
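To make the hedge concrete with purely illustrative numbers (not figures from the trader): suppose he buys 10 tonnes of robusta from farmers at $1,700 per tonne and immediately sells futures on the same volume at $1,750. If the market falls to $1,600 by the time he sells the physical coffee, he loses about $100 per tonne on the beans but gains roughly the same amount by buying back the futures at the lower price, so his margin stays close to the original $50 per tonne gap between the physical and futures prices. The futures position offsets most of the price risk on the physical coffee, which is why he hedges every purchase and sale.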
https://medium.com/environmental-intelligence/trading-your-coffee-cup-48cbe7e218d5
['Thuận Sarzynski']
2020-12-22 14:36:00.935000+00:00
['Supply Chain', 'Sustainability', 'Food', 'Coffee', 'Business']
Hidden Gems: Event-Driven Change Notifications in Relational Databases
Hidden Gems: Event-Driven Change Notifications in Relational Databases A powerful non-standard feature that developers should know about Look! A 500ct diamond just laying on the beach! (Image compliments of Chris Coe). Introduction Wouldn’t it be great if we could receive event-driven change notifications (EDCN) when data changes directly from the database without having to poll for updates? This feature is, in fact, available in some relational databases, but not all, as it’s non-standard functionality and not part of any SQL specification. In the three examples covered in this article, this functionality is expressed via the implementation of an interface that is then registered with the JDBC driver directly. This opens the door to a myriad of potential use cases that can be expressed without the need to poll and which do not require the developer to write infrastructure code to deal with changes in data and notifying interested parties. Instead, we can interface with the driver directly and listen for changes and, when they occur, execute whatever workflow we have in an event-driven fashion. A few examples where this could be helpful include: Caching (more on this when we cover PostgreSQL, see also Good Candidates for CQN) Honeypots for database tables — also see poison records Debugging problems Logging changes Analytics and reporting There are, of course, some consequences when relying on this functionality. The most obvious implication is that it’s a non-standard feature that ties the application directly to the database. I was speaking with Michael Dürgner on LinkedIn about an example implementation as it pertains to PostgreSQL, and he commented that: “[W]hile it’s definitely a great way to do this, one of the big drawbacks is that you move application logic into the RDBMS. Not saying you shouldn’t do it but make sure that you have people with deep understanding of the RDBMS you use on board since it’ll be rather unlikely your average software will be able to trouble shoot. Another huge challenge with this approach is continuous delivery since your RDBMS needs to be deeply integrated with your delivery pipeline.” I agree with Michael’s position, and keeping business logic out of the database tends to be a good practice. Projects that rely on object-relational mapping (ORM) tools such as the Java Persistence API (JPA) to generate the database schema directly from one or more object models immediately lose portability and simplicity when developers are required to add logic in the database tier which probably belongs in the application itself. If developers are not careful, they’ll end up having to use the same database for testing as used in production and this could easily lead to pain and regret. I proffer the following question to any engineer considering using EDCNs via the JDBC driver: can the application still function as intended without the inclusion of whatever it is that you’re building that relies on this functionality? If the answer is “yes” then what you’re doing is likely fine; on the contrary, if the answer is “no”, then this is a strike against using EDCNs and alternatives may need to be considered. 
Finally, this feature on its own is not a substitute for well-engineered message-oriented middleware (MOM), which typically provides out-of-the-box solutions for guaranteed delivery, message persistence, at-least-once/exactly-once delivery, delivery via queues and topics, and strategies for flow control (see also: backpressure), and which addresses fault tolerance and scalability concerns. The presence of these requirements could be a strong indicator that an approach relying on EDCNs needs to be reconsidered. Below we explore this functionality as it exists in the PostgreSQL, Oracle, and H2 databases; we also include some general comments on MySQL and its fork, MariaDB. Throughout this article, we rely on Java 13.0.2 and Groovy 3.0.4 and include links to the various scripts on GitHub, which contain extra notes pertaining to how to set up the required dependencies and any other preconditions necessary to run the examples.
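As a concrete illustration of the PostgreSQL side of this idea, here is a minimal, hypothetical sketch (not taken from the article's GitHub scripts) that uses the standard pgjdbc driver with LISTEN/NOTIFY. The database name `demo`, the credentials, and the channel `orders_channel` are invented for the example, and something else (for instance a table trigger calling `pg_notify('orders_channel', ...)`) is assumed to be issuing the notifications. Note that the stock org.postgresql driver buffers notifications on the connection and hands them back through `getNotifications()`, so this loop still performs a lightweight check; the fully callback-driven listener interface described in the article is offered by alternative drivers and is not shown here.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class OrdersListener {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details -- adjust URL, user, and password for your setup.
        String url = "jdbc:postgresql://localhost:5432/demo";
        try (Connection conn = DriverManager.getConnection(url, "demo_user", "demo_pass")) {
            // Subscribe this session to the channel; NOTIFY on the same channel
            // (e.g. from a table trigger) will be delivered to this connection.
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("LISTEN orders_channel");
            }
            PGConnection pgConn = conn.unwrap(PGConnection.class);
            while (true) {
                // A trivial round trip prompts the driver to read any pending notifications.
                try (Statement stmt = conn.createStatement()) {
                    stmt.execute("SELECT 1");
                }
                PGNotification[] notifications = pgConn.getNotifications();
                if (notifications != null) {
                    for (PGNotification n : notifications) {
                        System.out.printf("channel=%s pid=%d payload=%s%n",
                                n.getName(), n.getPID(), n.getParameter());
                    }
                }
                Thread.sleep(500); // back off briefly between checks
            }
        }
    }
}
```

Used this way, the application can react to rows changing (cache invalidation, logging, and so on) without scanning tables for differences, which is the core appeal described above.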
https://medium.com/better-programming/hidden-gems-event-driven-change-notifications-in-relational-databases-a87a7bdb02ad
['Thomas P. Fuller']
2020-08-05 19:25:04.401000+00:00
['Database', 'Software Architecture', 'Software Development', 'Software Engineering', 'Programming']
Python Dictionary Methods — Explained
❓ What is a dictionary ❓ A dictionary is a data structure that allows you to store key-value pairs (since Python 3.7, dictionaries preserve insertion order). Here’s an example of a simple Python dictionary: This dictionary uses strings as keys, but the key can be, in principle, any immutable data type. The value of a particular key can be anything. Here’s another example of a dictionary where keys are numbers and values are strings: An important clarification: if you try to use a mutable data type as a key (e.g., a list), you will get an error: In fact, the problem is not with mutable data types but with non-hashable data types; usually, though, they are the same thing. Retrieving data from a dictionary 📖 Square brackets [] are used to get the value of a particular key. Suppose we have the pair 'marathon': 26 in our dictionary: Again, you will get an error if you try to retrieve a value for a non-existent key. There are methods to avoid such mistakes, which we will talk about now. Adding and updating keys 🔑 Adding new pairs to the dictionary is quite simple: Updating existing values is exactly the same: Deleting keys ❌
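The code snippets in the original post were embedded as images or gists and are not preserved in this text, so the following sketch reconstructs, with invented example values, the kinds of dictionaries and operations the paragraphs above describe.

```python
# A simple dictionary with string keys (illustrative values).
runner = {'name': 'Alice', 'sport': 'running', 'marathon': 26}

# Keys can be any hashable (usually immutable) type; values can be anything.
medals = {1: 'gold', 2: 'silver', 3: 'bronze'}

# Using a mutable, non-hashable type such as a list as a key raises TypeError.
try:
    bad = {['a', 'b']: 'oops'}
except TypeError as err:
    print(err)  # unhashable type: 'list'

# Retrieving values: square brackets raise KeyError for a missing key,
# while .get() returns a default instead.
print(runner['marathon'])        # 26
print(runner.get('sprint', 0))   # 0, no KeyError

# Adding a new pair and updating an existing one use the same syntax.
runner['country'] = 'US'
runner['marathon'] = 42

# Deleting keys: del removes a pair, .pop() removes it and returns the value.
del runner['sport']
distance = runner.pop('marathon')
print(distance)  # 42
```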
https://medium.com/python-in-plain-english/python-dictionary-methods-explained-1149c6d9a532
['Mikhail Raevskiy']
2020-11-09 22:44:38.407000+00:00
['Programming', 'Data Science', 'Software Development', 'Python', 'Technology']
It’s Possible You Don’t Have Time for a ’90s A Capella Dance Medley
…by actual Danish people, but, you know, maybe you should go read The New York Review of Books, or something. Yes, there’s Ace of Base.
https://medium.com/the-hairpin/its-possible-you-don-t-have-time-for-a-90s-a-capella-dance-medley-2304c0d1bff8
['Nicole Cliffe']
2016-06-01 11:56:30.155000+00:00
['The Nineties', 'Danes Ftw', 'Music']
How Walmart Is Disrupting Primary Care in America
Last week, news broke that the retail giant is suspending its $35 online minimum, offering free shipping to anyone who forks over a $98 yearly subscription fee. Headlines described how Walmart+ was looking to compete with Amazon Prime in a war over the hearts and minds of online shoppers. That might be true, but there’s a deeper, underlying motive at play. Drug dealer ambitions. The move also enables members to receive free mail-order pharmacy delivery. Put that next to ongoing disruption across the healthcare landscape and the picture begins to paint itself. Walmart has a bold vision of becoming our “neighborhood health destination”, and they might just accomplish it. More importantly, it could mean better access and affordability for us. Live better by having more convenience Did you know that over 90% of Americans live within 10 miles of a Walmart? Probably not too surprising if you’ve been on a cruise in the suburbs. But imagine if you could go to Walmart to see a doctor, fill your script, and understand your insurance coverage. Well, now you basically can. The retailer is partnering with Oak Street Health on primary care clinics and offering Doctor on Demand telemedicine services to employees. They now also sell pet insurance and care. This would be a net-positive benefit for consumers, providing closer physical access, but also a one-stop shop for most preventive care. Revisiting our recently expanded, broader definition of healthcare through the lens of social determinants of health, we can’t forget about what’s on the shelves. Books, games, technology, and vitamins. Putting healthcare providers and consumers closer to such resources is not just a sign of the times. It’s a signal of what’s to come. Walmart recently began testing autonomous deliveries with GM Cruise, which could mean our future drug dealer could be a self-driving car. Other players are emerging with different ways of tackling convenience, ranging from virtual pharmacist support to home-based care. We’re just scratching the surface. Save money by having better options in time According to Walmart, their clinics’ prices are 30% to 50% lower than what patients would pay at physician offices. Primary care office visits cost $40; annual checkups cost $30 for adult patients and $20 for children. Mental health counseling and routine vision exams cost $45, and dental exams with x-rays start at $25. Now you ask, what about drugs? Along with offering $4 generic prescription drug programs and longer-term fill savings, Walmart struck a deal with the pharmacy benefit manager (PBM) Capital Rx, giving members more transparency at the point of sale. This is a pretty big deal. Here’s why. Researchers recently uncovered that the list price (cost before discounts or insurance payments) for 14 top-selling drugs increased by 129% between 2010 and 2016, while median insurance payments only grew by 64%. Even after accounting for inflation, out-of-pocket (OOP) spending or cost-sharing for members increased by 85% for specialty and 42% for non-specialty Rx. Insurance payments for these same drugs went up 116% for specialty but only 28% for non-specialty. This means that the costs of non-specialty drugs are increasingly being passed on to consumers. Which means that the direct-to-consumer model should be focused on reducing prices, engaging in preventive care, and searching for alternatives that result in better outcomes.
In other words, if we’re going to be paying for this, we deserve to… Know what we’re paying for Get the best options to inform our decisions Benefit from exceptional service or convenience Just like what we’d expect from any other business, whether it’s Netflix, Amazon, or Apple. A cautiously optimistic future At the end of the day, let’s not forget that this is healthcare. Aside from being an incredibly complex system, it’s strongly influenced by politics, legislation, and corporate greed. However, I believe that the lowest common denominator, or standard of care, will rise. And much-needed consumer advocacy will not only help us, because we need it, but also the underserved communities who need it even more. Which means it’s not just healthcare, it’s the impact of innovation on our lives. All opinions are my own.
https://medium.com/swlh/how-walmart-is-disrupting-primary-care-in-america-abfa95e5ece1
['Sid Khaitan']
2020-12-25 22:56:44.182000+00:00
['Innovation', 'Disruption', 'Healthcare', 'Retail', 'Health']
MerzFiles #04: I am an AI-driven troll (probably).
MerzFiles #04: I am an AI-driven troll (probably). I greet you, my dear 45! So you are probably aware of my biggest sin — I begin series. And then I begin a new one, and so on. Sequential thinking? Anyway, I continue my series sometimes, oh yes I do. So here’s a deal: when we reach 100 subscribers, I’ll tell you more about my series. Today I’ll just tell you about “Our Research Facility”. I started this series on Instagram (are we following us?) years ago. It was a small portal into another dimension, full of mad scientists, mysteries, and other weird stuff. And now I will invade Medium with this stuff from an alternate reality. For example this: Meaning of Life Reports from our Research Facility No more, no less. Apropos, the meaning of life. Here is my favorite, written by Artificial Intelligence (still GPT-2, back in 2019):
https://medium.com/merzazine/merzfiles-04-i-am-an-ai-driven-troll-probably-77cfc25b94a8
['Vlad Alex', 'Merzmensch']
2020-10-29 15:29:42.240000+00:00
['Artificial Intelligence', 'Art', 'Culture', 'Videogames', 'Transmedia']
I Joined a Daily Blogging Challenge for 31 Days
Only those who will risk going too far can possibly find out how far one can go. — T.S. Eliot Early in July, I joined the BYOB (Blog-Your-Own-Book) Challenge hosted by Shaunta Grimes. Unlike most writing challenges, it’s a four-month challenge. In July we picked a topic for our book and planned out the posts that we were going to write on the topic. August was for writing the actual posts. Yep, all 31 of them. Now it’s September and we are going to edit our posts. Then we’ll publish them in a book format in October. Everything sounded great on paper. The trouble is, I’ve never done a challenge like this before. While I did develop a daily writing habit a couple of years ago when I was writing my first book, I’ve since lost that habit — most likely on the same day that I handed in my final manuscript to my editor. Oh well. Since then, I’ve struggled to write regularly. And I’m not talking about writing daily — I’ve been struggling to write even once a week. Writing brings me a lot of joy — when I eventually get around to doing it. So, I thought this BYOB challenge would be a great kick up my butt to get me back into daily writing. Since I’m a new blogger, I had a very modest goal and I just wanted to finish 31 posts in August. Everything else was a bonus. Here’s what happened. What Went Well I finished writing all 31 posts. Yay me! However, only 27 of them got published. I had submitted some pieces to big publications and they didn’t get published by the end of August. I increased my curation rate from 50% to 67%. In July I only published 8 pieces and 4 of them got curated, so that’s a 50% curation rate. In August, 18 of my 27 published pieces got curated, and my calculator informs me that it’s a 67% curation rate. I honestly thought my curation rate would go down because of the increased volume of my writing, but apparently the curators liked my stuff more, not less. I started my own publication and I got into 6 new publications. I started my own pub Thrive Now, and got into a few big publications. I also got into some smaller but really cool ones that I’ve had my eye on ever since I joined Medium. They included: The Startup, The Writing Cooperative, P.S. I Love You, Invisible Illness, An Injustice! and Home Sweet Home. Now I have more “homes” for my new pieces. I wrote faster. I’m a painfully slow writer. It takes me on average 3 hours to write a 1200-word blog post. But my saving grace is my focus. If and when I choose to use my superpower, I can focus for hours on end and get into a flow state pretty easily. As a result of the challenge, I’m writing slightly faster now, so maybe 2.5 hours per post instead of 3. It’s still slow I know, but it’s progress! I wrote every day. And I still do. This is the biggest win of all. I wanted to form a daily habit, and now I have it. What’s more, I thought I would be so sick of writing by the end of August and probably need to take time off to recover from all the stress. But so far that hasn’t happened and I’m still writing every day. What Didn’t Go So Well I didn’t write “on topic” all the time. Yes, we were supposed to come up with a topic during our planning process and plan our posts around it. I did, but I didn’t follow my plan. My topic was perfectionism and I probably wrote 2/3 of my posts on that topic, while the rest of my posts were all over the place, ranging from relationships and parenting to gender equality and a couple of really left-field ones. I didn’t publish all 31 posts. The perfectionist in me didn’t like that. 
I wanted to write and publish 31 posts, damn it! But the reality was once I submitted my piece to a publication, I had no control over how long it would take for them to publish it. Oh well. I became slightly obsessed. I became a little obsessed with writing as a result of the challenge. I went to bed thinking about writing and woke up thinking about it, too. Some days I woke up in the middle of the night because I had just thought of a better phrase, subhead, or quote for one of my pieces. And depending on how late it was in the night, a few times I actually got out of bed, went downstairs to my computer, and made the changes right away! I got into trouble with my husband (well, a little). In my early days of building my coaching business, I often worked right through lunchtime and ended up getting into trouble with my husband. One of his frequent questions was, “Did you eat today?” I’ve since learned to put in strict boundaries around my work hours, and I only work on weekdays now. Because of the challenge, however, I ended up writing on weekends too. My husband complained that I was working harder than him (I’m a working mum — of course I work harder than him even before the challenge. But don’t tell him that), and he was feeling neglected. Should You Join a Daily Writing Challenge? Fortune favours the brave. — Virgil I think so, especially if you’re a new writer or blogger like me. You’re probably trying to find a writing rhythm that works for you, and writing every day for 30 (or 31) days is a great way to do that. However, what really made it work for me was that I made a commitment to the challenge. And that means two things: time and accountability. The Two Keys to Commitment I set aside two- to three-hour blocks of time every day either first thing in the morning, or first thing after lunch to write. I put my 3-year-old toddler in front of the TV with her favorite show, loaded up her food supply, and off I went. Child neglect? Maybe. But I made up for it by spending quality time with her before and after my writing time. I also did extra writing on the two days that she went to daycare, and on weekends when my husband was around to help out. The other three days of the week I just did the minimum to get my writing done. So, that meant shorter or less in-depth pieces. It wasn’t perfect, but I was okay with it. In terms of accountability, I joined a wonderful writing group and we had regular write-in sessions on Zoom where people just showed up for two hours and wrote. It was so helpful to have that level of accountability, and I only missed a few of those sessions during the month. As a newbie, I needed all the help I could get! Give it a try I’m only halfway through the BYOB Challenge, but I’m going to go out on a limb and suggest that you try a daily writing challenge. And if you want to increase your success rate, make sure you set aside time and have some accountability in place. Whatever happens, you’ll learn something incredibly valuable about yourself, your writing process, and your readers. And that’s priceless. Everything else — increasing your curation rate, doubling your followers, or getting amazing feedback — will be a sweet bonus.
https://medium.com/ninja-writers/i-joined-a-daily-blogging-challenge-for-31-days-on-medium-8f2d7a7e4da9
['Annie Huang']
2020-09-15 23:23:09.805000+00:00
['Byob', 'Business', 'Writing', 'Blogging', 'Writing Tips']
Hunger and survival in Venezuela
These boxes, the government claims, will feed a family of four for one week. They are supposed to be delivered once a month to all those who have signed up for the “Carnet de la Patria” — a controversial ID card that grants holders access to subsidised food. However, according to those who get the CLAP boxes, the food arrives spoiled or past its sell-by date, is nowhere near enough to last even a week, and never comes more than, if you’re lucky, once every six weeks. Around Cumana, seven hours east of the capital Caracas, people say the boxes arrive once every three to four months. Pilongo, Vallenilla, and other locals say the trucks still barrel through here daily — in convoys of as many as 40 — laden with precious food and never stopping for angered, hungry people. They recall how people started coating the road with oil so the trucks would skid into a ditch and then everyone would swarm around and loot them. “A population which is not well fed become thieves and will steal any food no matter what.” When the truck drivers wised up and took a diversion, people got metal strips with sharp teeth and laid them across the other road. Tires would blow out and trucks would still be looted. When the National Guard came and confiscated the metal strips, the community protested that they belonged to them. After a fight, the mayor agreed and returned the strips. As hunger grew around the country so did the number of incidents like these, leading Maduro to issue an edict that armed National Guards must accompany the government food trucks. This has given greater license to the much-feared National Guard, who locals accuse of being behind the bodies they say have been turning up on nearby beaches. The threat hasn’t stopped people. They just choose different trucks. “Malnutrition is the mother of the whole problem,” says Pilingo’s former teacher, Fernando Battisti Garcia, 64, talking from his home in the town of Muelle de Cariaco. “A population which is not well fed become thieves and will steal any food no matter what.” People call it “the Maduro diet”. “As soon as people see a big truck coming with supplies,” explains Pilingo, “they go into the street — men, women, even children — and stop the truck and take the supplies.” It happened just a few days ago, he says, adding that the National Guard has begun searching people’s houses and if they find anything — food, toilet paper, supplies — they take you to jail. So people have started hiding the goods in tombs in cemeteries, or lowering them in buckets into water tanks. “Everyone is just so desperate,” Pilingo shrugs. With their erratic and infrequent delivery of meagre, often spoiled goods, CLAP boxes have done little to address hunger. What they have done, however, is line the pockets — and secure the loyalty — of military and government officials. The US treasury estimates as much as 70 percent of the CLAP programme is victim to corruption, while accusations of military and government officials siphoning off millions of dollars and creating a lucrative food trafficking business and thriving black market have led to sanctions and intensifying international scrutiny. The CLAP boxes have also succeeded in creating dependency. As inflation continues to spiral upwards and poverty escalates — jumping from 81.8 to 87 percent between 2016 and 2017 — more and more desperate people have become reliant on them to supplement their impoverished diets. 
In 2018, one in two Venezuelans say CLAP boxes are an “essential” part of their diet, while 83 percent of pro-Maduro voters say that CLAP is their main source of food.
https://newhumanitarian.medium.com/hunger-and-survival-in-venezuela-db8218cec986
['The New Humanitarian']
2018-11-21 16:51:23.976000+00:00
['Long Reads', 'Crisis', 'Health', 'Humanitarian', 'Venezuela']
Thoughts on this election day
Thoughts on this election day As our belief in objectivity erodes, what can we create from the surreality of 2020? Photo by Jeremy Bishop on Unsplash Somewhere in the dissonance between the lull of quarantine and the hyperactive news cycle, the sheer farce of vitriolic politics set against the serenity of so many neighbors planting gardens and engaging in mutual aid, the smoke from the fires, the rain from the storms, dates and times meaning less, the hologram of digital communication, and the chronic nausea of watching our governing institutions corrode, this year has been feeling pretty damn surreal. I say surreal not only in the sense of weird or dreamlike, but in the rising feeling that I’m walking through a Salvador Dali painting: dripping clocks as time and reality melt around me, and less and less ability to take “reality” at face-value. Ours is an era of objectivity eroding. No longer does the news feel like some safe, stable, soothing voice of Walter Cronkite. The sense many of us lived under that the ground beneath our feet, from elections to institutions to social interactions to the economy, was somehow unshakeable and stable — that sense is resolutely gone. It feels like we’re standing on water. And that’s a good thing. From right to left, no one trusts the media anymore. We doubt our experts, our politicians, our anchors. Literally — our anchors, those whose job it is to anchor us to the ground of truth beneath the fluctuating ocean of lies and opinions and agendas — those anchors aren’t hitting ground. And it’s not because the news has gotten “worse.” It’s because that ground was never there to begin with. The stability and sense of objectivity we lived in was an illusion all along, and now we are slowly coming to terms with what it means to live in a world without it. Walter Cronkite and the media of old always had an agenda, a perspective, a bias. The thing is — we only really got one agenda, and so that agenda seemed objective. But the business of media has always been about framing our sense of reality, using data and events to tell us stories. There have always been a story and a story-teller in the mix, and even those whose only commitment is to truth can never perceive anything objectively. In this murky ocean of missing objectivity, we come to understand that everything is a perspective, and it always has been. That our institutions, our social order, our economy and our laws are only as strong as we make them. Our belief in them is all that grounds us to them. Our beliefs are the only ground there is, and beliefs always come from a subjective experience of the world. To me, it used to feel like chaos, but as time goes on, the chaos feels more like opportunity. I used to long for a return of that sense of stability and truth, some kind of resolutely objective compass to be provided for me so I could measure my own beliefs and actions against it. But no matter how widely I read or how many facts I take into consideration, I can have no compass but my own. The same goes for you. Realizing this is a good thing. In recognizing that there is subjectivity, not objectivity, in every perspective and every action, we come to recognize our power in crafting reality. These beliefs are not fixed. This information is not fixed. These institutions are not etched in stone, and even stone can crumble. When we release our belief in objectivity, and our fears of the seeming chaos of our systems crumbling, we move closer to truth. 
The belief we held that our systems of old were stable and resolute was never true. The understanding that everyone is acting from their own subjective experience is true. And as we wade into this new kind of truth, we flow into the recognition that we have so much more power to create the world than we thought we did. Our systems are melting like Dali’s clocks, and this is a good thing. Systems are like currents: patterns of behavior that are perpetuated enough to take on the ability to condition behavior themselves. Systems of governance and law, social codes and interactions, communication and information: these have only ever been conditioning mechanisms. They were only ever as strong as we were conditioned by them. This lack of foundation at first feels chaotic, and the perception of chaos terrifies the mind. How can it determine how to act without a belief in something fixed to stand on? But rather than ask that question with a tone of incredulous fear, I invite us, each and all, to ask it from a place of wonder and imagination. How can we determine how to act as our belief in objectivity crumbles? What will we do with this newfound realization that our systems, our culture, our institutions, our society and our lives are so much more malleable than we’d thought? What will we do with this recognition of power? Rather than viewing this moment as the ground falling out from underneath us, we can view it as the walls falling away from around us. Our sense of the possible expands, and we’re left in a place of unimaginable power to imagine and create the world as we truly want it. Whether you believe in “creating your own reality” from a New Age perspective, or simply are understanding that we create the society we live in through our actions and interactions, you cannot deny that we have power to shape our world. How will we shape it? What kind of world do we want? What kind of world do we want, but never believed before was possible to create? I do not believe that human nature is fundamentally loving and nurturing, and that in the absence of our dominating systems, everyone would just get along in freedom and harmony. I also do not believe that human nature is fundamentally vicious and competitive, and that the absence of our systems would rain down chaos and violence and brutality. Rather, I believe that human nature is fundamentally adaptable. I believe it is responsive to its environment and its experiences. My question for you, on this strange November Tuesday, comes from that old adage about wolves: there are two battling within you, one is loving and compassionate, the other, competitive and cruel. Which wolf wins? The one you feed. And so I invite you not only to imagine a world where your good wolf wins, but also to consider carefully what it looks like to feed it. What does it eat? What kinds of social and political institutions feed your good wolf? What kind of economy? What kinds of relationships? What work does your good wolf do? How does your good wolf talk to its neighbors? What life does your good wolf live? The erosion of our institutions does not necessarily signal a slide into brutality. It is nothing more or less than the erosion of beliefs that were only ever beliefs to begin with. And with that slipping away of foundation comes the opportunity to recognize our power to build our own foundations, in the ways that serve us and our communities and our lives. 
To quote the late Murray Bookchin, “The assumption that what currently exists must necessarily exist is the acid that corrodes all visionary thinking.” The breakdown of our systems and our societal narratives, and our belief in them as fixed and objective, need not be a time for fear. It can be a time of wonder. A time to build a world that is grounded in the truth: that we are powerful to make this world beautiful, and the things we let stop us are only as powerful as we make them. The breakdown of our belief in solidity is a chance for consciousness: to build consciously the lives and world we want to live in. The wolves are only as strong as they are fed. What does your good wolf eat?
https://medium.com/dogs-with-buddha-nature/thoughts-on-this-election-day-ff5f8d043292
['Anna Ronan']
2020-11-14 19:08:09.465000+00:00
['Politics', 'Society', 'Possibility', 'Consciousness', 'Imagination']
Why Chicago’s Mayor should reconsider social media monitoring
I’m an associate professor of social work and sociology at Columbia University, and for the past 8 years I’ve studied the social media communication of Chicago’s Black youth. Since completing my Ph.D. at the University of Chicago in 2012, I’ve spent countless hours with Black youth on the south and west sides of the city, interviewed violence outreach workers and directors, and analyzed thousands of social media posts from Black youth who live in the toughest neighborhoods in Chicago. I’ve made some mistakes in my research that include an overzealous approach to identifying violent and aggressive speech from Black youth without first centering on their experiences, embracing their humanity, and seeking to understand why they were on social media in the first place. I have learned more from listening to youth than from monitoring their social media communication. Here are a few thoughts that Mayor Lori Lightfoot and the future task force might consider: 1. Monitoring social media to capture “aggressive” communication isn’t a prevention tool. In the SAFElab we have found that threatening or aggressive conversations usually come days after calls for help. In fact, rarely did we encounter a threatening post that wasn’t attached to deep and complicated trauma. 2. AI (Artificial Intelligence) and other technology tools are extremely bad at deciphering context. In addition, humans who don’t share the same culture, language, or community history can also be really bad at interpreting language and context on social media. It is often the case that Black speech is misinterpreted as hateful or aggressive when using AI tools. 3. Social media monitoring, without consent, breaks down forms of trust that are needed to develop prevention strategies that keep people safe while protecting their humanity. 4. Social media monitoring is rarely, if at all, deployed in predominantly White communities. Why is that? This new task force should engage in a PROP (Privilege, Race, Oppression, Power) analysis of any social media plans before deployment. 5. If you don’t do this right, it will inevitably become the new “stop and frisk” and eventually a digitally supported new Jim Crow or what Ruha Benjamin calls the “New Jim Code.” I do, however, think there are some potential opportunities. First, you should radically involve youth and community members in the decision-making process and in the development and integration of any technology tools that will live in their communities. Second, I do believe social media and AI (with consent) can help identify community strengths, can serve as an alternative needs assessment, and can surface critical needs that the Mayor’s office should attend to (e.g. public safety, COVID-19 testing, environmental conditions, etc.). Third, this is the perfect opportunity for interdisciplinary collaborations that move outside of law enforcement. For example, partnerships with the READI program could help youth and young adults get involved in technology workforce development and training that allow community members to create safe and ethical technologies that work for their communities. The task force should develop a set of non-negotiables when using social media for public safety. This should include tough conversations around how we keep everyone safe while protecting their privacy and their humanity.
https://medium.com/carre4/why-chicagos-mayor-should-reconsider-social-media-monitoring-8a68a49e0198
['Desmond U.Patton Phd']
2020-08-26 17:20:07.870000+00:00
['BlackLivesMatter', 'Chicago', 'Criminal Justice', 'Social Media', 'AI']
Healthy Lunch Ideas for Work
Every day, millions of people head to work. In fact, most of us will spend more time on the job than almost anywhere else during the day. With such a time commitment, it’s especially important that your healthy eating habits carry over into this environment, too. If you’re lucky enough to have a cafeteria at work, that might be an option for your everyday lunch (maybe even breakfast), but the truth is that preparing and bringing your own healthy options from home is almost always a better choice. And it’s easy to do! The problem with eating out There are plenty of options for eating meals while at work, but for many just bringing a lunch from home is the best way to maintain an optimal level of nutrition and health. The primary issue is simply that most people don’t have time to order a delicious, nutritious meal during short lunch breaks — and so fast food, vending machines, and employee cafeterias are generally the only viable options. But here is why those may be less than ideal choices. Cafeterias are low-cost for a reason: The idea that “you get what you pay for” applies at cafeterias. While they can sometimes offer healthy options, for the most part these low-cost foods are prepackaged and heavily preserved with sodium and other additives. And that can be an issue that leads to increased blood pressure. Not so fast with fast food: Those quick and simple fast food orders might get you out the door quicker, but many are filled with high cholesterol, extra calories, and loads of unhealthy trans fats. And that can lead to significant weight gain as well as raise the risk for numerous health issues ranging from heart disease to diabetes. Not all snacks are created equal: Vending machines are a convenient way to get an afternoon snack, but almost everything inside is either sugary, processed sweets or salt-laden chips. Neither of these is good for the body because they cause blood sugar spikes and are difficult to break down internally. You might think relying on the simplicity of these quick fixes happens just on the rare occasion, but it can quickly turn into a cycle of convenience for many. After all, it’s easier to grab a burger than to pack a lunch — and with our hectic lives that convenience is sometimes enough to win out over health. These bad habits can end up being hard to break, but not impossible with some effort. Why bringing your lunch is a healthier choice While there are some healthy dining options that you could choose, the fact is that bringing your own lunch from home is generally much healthier and can provide greater nutrition. Here are some reasons why it’s a good idea. Control over ingredients: Packing a lunch gives you total control over what goes inside it. So, you can monitor the addition of salt and sugar and other ingredients.
That means vegans can ensure they’re having a meal free of any animal products or byproducts and those with gluten or dairy sensitivities can pack food that meets these specific dietary requirements. Allergen control: Along with that, homemade lunches also make it easy to avoid allergens. Nuts, fish, and any other type of food allergy can be deadly if there’s accidental exposure, but when making a home-packed lunch there is no risk of contamination in any way because you can monitor all ingredients. Better nutrients: The best thing about homemade lunches is that you can use fresh ingredients and prepare different meals without loading them down with preservatives. And you can add a surplus of the vitamins and minerals you need to take in more of. For example, if you need more calcium, you can pack a lunch with yogurt, nuts, and cheese. More environmentally-friendly: Another bonus of homemade meals is that they cut down on waste. Many people take leftovers to lunch instead of throwing them away, and you can invest in reusable containers that eliminate the need for styrofoam and plastic bags or other unnecessary packaging. Economical: As another benefit, bringing a lunch from home often ends up saving employees money in the long run. Instead of a $10 meal from a restaurant, home-packed lunches often cost just half — if that. That means it’s possible to save as much as 200 dollars or more every month just by bringing a lunch instead of buying one. Best of all, not only do those who bring lunch from home get to enjoy more nutrition and better overall health benefits, but you get to have a delicious prepared lunch every day. By making lunch at home it’s possible to match up your tastes to a great meal, and it can be different on a regular basis — whether that’s sandwiches, wraps, sushi, or soup. That’s unlike restaurants where you’re often at the mercy of whatever the restaurant is offering. Easy ways to pack your lunch every day One of the biggest hurdles to bringing your own lunch is often just time and simplicity. However, it’s well worth it and easy to do if you keep a few tips in mind. Here are some ideas that can help make it an ongoing habit. Go fresh Fresh fruits and veggies are a fast and easy snacking option that don’t take any prep work and can even serve in some cases as an entire meal. 
Keep plenty of fresh produce on hand to toss into a lunchbox. Make meals ahead of time While some things like sandwiches aren’t always best if they’re made ahead of time, there are plenty of meals that can be pre-measured and pre-made and then kept in the fridge until ready to take with you. Granola bowls, soups, and other similar meals are easy enough to make on a day off and store to grab and go. Mason jar salads and soups also make it easy by adding all the dry ingredients to the container — all you have to do is add the broth or dressing and you have an instant lunch. Know that leftovers matter When planning out your weekly dinners, think about lunch as well. There are numerous dinnertime meals that work perfectly as leftovers because they reheat great. Just make sure to package up leftovers in an airtight box to keep out air that can make them spoil quicker. Always think easy Fruit, granola bars, rice cakes, and other similar snacking options are important to keep on hand as well. They can be tossed into a lunchbox to ensure that unhealthy snacking isn’t something that entices you during the workday. Fall into a routine Humans are creatures of habit, which means that once a routine is established it’s much easier to stick with it. It will be simpler to pack a lunch each day if you set up a routine of preparing meals. Do so for several weeks and, before you know it, the practice will become automatic. Have the right containers Bento boxes make it easy and fun to prepare a great lunch every day. With separate, leak-proof containers you can bring a variety of sides with you, like salad with dressing, dips, and more. Easy recipes for creating great lunches every day Another key that will make bringing lunch to work every day a success is variety. Changing up your menu will keep you excited to bring your lunch. Here are some recipes that not only taste good, but are easy to make. Broccolini topped with fresh tomatoes and a coconut-based vinaigrette creates a side dish that can double as a fresh salad. Top with some nuts for extra protein for a light lunch option that takes just minutes to make. Hummus is more than a snack dip — it can be a filling main course, too, especially when paired with a plethora of dipping “utensils” like carrot sticks, cucumbers, pita bread, and crackers. This recipe combines traditional chickpeas with some pumpkin flavor for a handy to-go bowl. If you want a fresh, fun bento-friendly option that’s easy to make and filled with delicious nutrition, these rolls are a top pick. They are made from canned tuna and can be prepared a day or two ahead of time for a quick, easy lunch option. If your workplace has a microwave, leftovers can become a great lunch option — and few things heat up as well as stir-fry. This recipe cuts out the MSG and creates a low-carb meal with tons of veggies and protein that’s perfect for any workday. Make coworkers jealous with this vegan-friendly take on tacos — a handful of spices give these bites plenty of flavor. Just make sure to package tortillas separately from the ingredients, and then set up your own taco station at work. Soups heat up great and can be packed in a thermos or a sealable bowl. This option is creamy, but cuts the dairy so you can leave it out without spoiling. Keeping snacks in your lunchbox is important since a little pick-me-up is often needed throughout the day. This recipe makes it easy to give yourself a healthy snack that also tastes great, too.
https://medium.com/thrive-global/healthy-lunch-ideas-for-work-44f63644d69d
['Thrive Market']
2017-02-23 22:50:52.733000+00:00
['Work', 'Food', 'Recipe', 'Health', 'Healthy Foods']
Scaling OpenVidu
Architecture First of all, OpenVidu offers a "premium" paid version called OpenVidu Pro. This is the version we have deployed and that our telemedicine solution is using. We chose to use the paid version for two main reasons: it offers detailed monitoring of the video call sessions which is great for troubleshooting and analysis it offers a means for scaling its media nodes — the server nodes responsible for streaming the video content OpenVidu Cluster Architecture from OpenVidu Pro Scalability page Deployment We deployed the OpenVidu Pro Cluster to our AWS account following their guidelines. It uses a parameterized CloudFormation Stack that takes in some configuration values — at Nexa we are very familiar with CloudFormation so this was a great fit to how we do things when it comes to infrastructure. Auto Scaling Problem If you read OpenVidu Pro's feature list you will notice that scalability is a manual effort and that elasticity is still a work in progress. Manual Scaling in OpenVidu Inspector from OpenVidu Scalability page "So what gives? Are you telling me I read up until this point just for you to tell me easily found information on how to press a button at a web control panel?!" — an understandably frustrated reader. Calm down, fellow reader. I am going to show you how you can automate this process. I was just giving you some context. Just keep on reading; it will pay off. One thing I omitted up until this point is that OpenVidu Pro has a REST API and it will play a major role in automating the auto scaling process: we are able to launch and drop media nodes using this API. Solution Context Let’s just list some things we know that will help us decide on a final solution to our auto scaling mechanism. OpenVidu Pro has a REST API that is able to launch and drop media nodes giving us the ability to scale nodes arbitrarily All video sessions will always have only two connected users: the patient and the doctor Video streaming is a CPU-intensive process i.e. mainly CPU usage in the media nodes increases as the load — connected users which translates to sessions created — increases OpenVidu Server — master node — balances the sessions uniformly across its media nodes, distributing the load Our telemedicine solution has fixed “office hours” from 8am to 10pm — this is a business rule From that we have: We need to scale the media nodes according to their CPU usage CPU usage will be uniform across all media nodes i.e. the average CPU usage will be practically the same as the CPU usage on any given node Finally we have what we need to formulate a solution. Upscaling Upscaling high-level architecture diagram CloudWatch Alarm which observes the CPU usage of the media nodes by their AMI Since the usage is uniform across the nodes, by observing the average usage of the nodes we are indirectly observing the usage in each node It is easier than monitoring each instance individually — would require an alarm per instance or polling telemetry data from the launched nodes which are dynamic SNS Topic as the alarm event destination By using a topic we can have multiple consumers Lambda subscribing to the topic Receives the alarm event and calls OpenVidu Pro's REST API to launch a new media node Should only launch a new media node if there are no media nodes in a launching status — verified by listing media node data using the REST API Note: to be able to use CloudWatch Alarms to monitor by AMI, the instances need to be launched with detailed CloudWatch monitoring enabled. 
In order to do that we need to modify the launch script that is used by the OpenVidu Server master node when launching media nodes. More on that in Part 2. Downscaling Given that "office hours" are fixed we can trigger a downscale to a single node after hours. It's not the ideal solution when it comes to cost management — since it could lead to having idle and/or underutilized media nodes during the day which we would still be paying for pointlessly — but it is by far the easiest one to implement. Downscaling high-level architecture diagram CloudWatch Event triggered periodically Lambda triggered from the aforementioned event Lists media nodes using the REST API Orders them by number of sessions descending Drops all except the first — most loaded — using the strategy when-no-sessions The Code In Part 2 we'll take a look at some of the source code of the autoscaling solution:
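Until then, here is a rough, illustrative Python sketch of what the two Lambda handlers could look like. It is not the production code from Part 2: the endpoint path, response fields, environment variables and status values are assumptions based on the behaviour described above (the article only confirms that media nodes can be listed, launched and dropped through the REST API, and that a when-no-sessions deletion strategy exists).

import os
import requests

# All of the following names are assumptions for illustration; the real
# endpoint paths, payloads and response fields may differ.
OPENVIDU_URL = os.environ["OPENVIDU_URL"]        # e.g. https://openvidu.example.com
OPENVIDU_SECRET = os.environ["OPENVIDU_SECRET"]  # OpenVidu Pro secret
AUTH = ("OPENVIDUAPP", OPENVIDU_SECRET)          # OpenVidu uses basic auth with this fixed user
MEDIA_NODES_URL = OPENVIDU_URL + "/openvidu/api/media-nodes"  # assumed path

def list_media_nodes():
    # Assumes the response wraps the nodes in a "content" list.
    return requests.get(MEDIA_NODES_URL, auth=AUTH).json().get("content", [])

def upscale_handler(event, context):
    """Triggered by the SNS topic behind the high-CPU CloudWatch alarm."""
    nodes = list_media_nodes()
    # Skip if a node is already being launched, as described above.
    if any(node.get("status") == "launching" for node in nodes):
        return {"launched": False, "reason": "a media node is already launching"}
    requests.post(MEDIA_NODES_URL, auth=AUTH, json={})  # ask the master node to launch one more
    return {"launched": True}

def downscale_handler(event, context):
    """Triggered by the after-hours scheduled CloudWatch event."""
    nodes = sorted(list_media_nodes(),
                   key=lambda node: len(node.get("sessions", [])),
                   reverse=True)
    # Keep the most loaded node, gracefully drop the rest once their sessions end.
    for node in nodes[1:]:
        requests.delete(MEDIA_NODES_URL + "/" + node["id"],
                        params={"deletion-strategy": "when-no-sessions"},
                        auth=AUTH)
    return {"dropped": max(len(nodes) - 1, 0)}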
https://medium.com/nexa-digital/scaling-openvidu-fd2018fe5fa4
['Rodrigo Botti']
2020-06-03 21:59:43.521000+00:00
['Scalability', 'Video Streaming Server', 'AWS', 'WebRTC', 'Openvidu']
How to Be More Creative
How to Improve One of the Most Employable Skills Photo by Riccardo Annandale on Unsplash Creativity Can Be Learned I used to think creativity was about being good at art, and being able to do a really good job on a project for school. I also thought that some teachers were creative and able to plan the most exciting lessons and decorate the best classrooms while other educators just did not care to be creative. Playing games, visualizing, brainstorming, developing a mantra, taking risks, observing others, and ultimately celebrating the ways that you are already creative are ways to strengthen this character strength. Today I am realizing that creativity includes so much more than I originally thought. Creativity incorporates everything from strategies to cook new dinners to teaching kids to stretch their minds about different ways that regular household items can be used. Play the Creativity Game One of my favorite games to play with my kids is the creativity game. We take an object nearby and go back and forth thinking of things that the item could be used for. It stretches thinking and makes the time enjoyable. We have an ongoing game during dinner where we keep track of creativity points. It is boys versus girls. We take an object and for two minutes come up with as many creative uses as possible. The other team has to verify that the list is reasonable. Last night the items were a sandwich-size plastic bag and a sock. Creativity includes depth of thinking and risk-taking. Visualize the Best Case Scenario Begin with the end in mind. Imagine the best thing that could happen. Now, get excited about that “Best Thing”. Visualize what it feels like, looks like, sounds like and any other of the senses that provide details to the height of possibility. Visualizing gives us perspective and new ideas of how to be creative. Everything is Figureoutable One of my favorite quotes is, “Everything is figureoutable”. Marie Forleo’s book, Everything Is Figureoutable, has given me a mantra to change my perspective on what is possible. Yesterday, my daughter wanted to go as a character from the Harry Potter Series, and with an hour before the Halloween activities at her school, I had to figure out how to tie a tie. As I got frustrated and realized that I had no idea what I was doing after watching two different YouTube videos, I kept hearing the phrase, “Everything is figureoutable” in my head. I believed that there was a way to figure it out. Determined, I found new videos, and after what was probably 20 minutes, I figured it out. Brainstorm Many Possibilities Even when you know how to tackle a situation or solve a problem, requiring yourself to think of other options will stretch your thinking. Sometimes it is fun to experiment with a less popular option. If the stakes are not that high, it is okay to see what happens if you try one of the choices farther down the list. Do Things You Have Never Done Before I make it a habit of writing every day. This practice has made me more creative over time, but recently after reading several of the Haikus amazing writers have crafted, I decided to stretch my creativity and try one. Writing poetry is something I have only started doing in the last few weeks. Writing a Haiku that not only included the correct number of syllables but one that also made sense and even had the element of rhythm and beauty seemed near impossible at one point. After a little research, I looked up the format to remind myself of the structure as well as found a fantastic syllable counter online. 
Writing my first Haiku got a lot easier. The more Haikus I read, the more I enjoyed my newfound creativity as a writer. I got encouraging feedback and even learned a lot about writing as I read the author notes that Tapan Avasthi ⛄️ ended his poetry with. Looking at an established poet’s work gave me great enthusiasm to write more. Observe Creative People Find creative people. Notice people who demonstrate the trait that you are looking to improve in yourself. When I put away my phone and pause to observe people around me, I can learn a lot. I am inspired, engaged and am provided with free resources. I might notice everything from how someone keeps busy waiting in the lobby at the hockey rink, to how a mom manages her four young kids in the grocery store. The little things that people do, the behaviors and the patterns that can be observed, provide interesting insight and might even help make life more productive. I also enjoy looking at other people’s resources. Some people have carefully packed bags that organize materials in the most amazing ways. Other people are reading or utilizing resources that might be useful in the areas I am looking to learn more about. Recognize the Ways that You are Creative Everyone is creative. Creativity does not look the same for everyone, but everyone has a gift and ability to think differently about something. Focusing on strengths leads to confidence. Every area of life includes creativity. Kids who play video games are incredibly creative when they are deciding how to master a level. People who volunteer use creative skills as they decide how to best serve large groups of people, and provide resources that matter for a reasonable cost. And looking at a busy holiday schedule, trying to decide where we will go and when is related to our ability to be creative and think differently. Ultimately we are already creative, but, like anything, the more we work at it the stronger the trait becomes. Looking at creativity with a growth mindset allows us to see potential. Working on being more creative will help us in all areas of life.
https://medium.com/med-daily/how-to-be-more-creative-9f5d8d82c20f
['Laura Mcdonell']
2019-11-03 11:56:40.702000+00:00
['Parenting', 'Self Improvement', 'Haiku', 'Creativity', 'Education']
Best Libraries and Platforms for Data Visualization
In one of our previous posts we discussed data visualization and the techniques used both in regular projects and in Big Data analysis. However, knowing the plot types does not take you beyond a theoretical understanding of which tool to apply to certain data. With the abundance of techniques, the data visualization world can overwhelm the newcomer. Here we have collected some of the best data visualization libraries and platforms. Data visualization libraries Though all of the most popular languages in Data Science have built-in functions to create standard plots, building a custom plot usually requires more effort. Dedicated libraries address the need to plot versatile formats and types of data. Some of the most effective libraries for popular Data Science languages include the following: R The R language provides numerous opportunities for data visualization — and around 12,500 packages in the CRAN repository of R packages. This means there are packages for practically any data visualization task regardless of the discipline. However, if we had to choose several that suit most of the tasks, we’d select the following: ggplot2 ggplot2 is based on The Grammar of Graphics, a system for understanding graphics as composed of various layers that together create a complete plot. Its powerful model of graphics simplifies building complex multi-layered graphics. Besides, the flexibility it offers allows you, for example, to start building your plot with axes, then add points, then a line, a confidence interval, and so on. Though ggplot2 is slower than base R and rather difficult to master, it pays huge dividends for any data scientist working in R. Lattice Lattice is a system of plotting inspired by Trellis graphics. It helps visualize multi-variate data, creating tiled panels of plots to compare different values or subgroups of a given variable. Lattice is built using the grid package for its underlying implementation and it inherits many of grid’s features. Therefore, the logic of Lattice should feel familiar to many R users, making it easier to work with. RGL The rgl package is used to create interactive 3D plots. Like Lattice, it’s inspired by the grid package, though it’s not compatible with it. RGL features a variety of 3D shapes to choose from, lighting effects, various “materials” for the objects, as well as the ability to make an animation.
https://medium.com/sciforce/best-libraries-and-platforms-for-data-visualization-b986a43aee3f
[]
2019-06-25 16:28:13.750000+00:00
['Big Data', 'Technology', 'Data Visualization', 'Data Science', 'Programming']
But The Ghosts Came Anyway
Written by Matthew Donnellon, a writer, artist, and sit-down comedian. He is the author of The Curious Case of Emma Lee and Other Stories.
https://matthewdonnellon.medium.com/but-the-ghosts-came-anyway-f312da1b600b
['Matthew Donnellon']
2019-05-26 02:26:43.095000+00:00
['Creative Writing', 'Poetry', 'Creativity', 'This Happened To Me', 'Poetry On Medium']
Safe vs Secure
When I hear “safety” in the context of software development, I’m always reminded that I need to automate my backup solution. The potential harm most software developers have to deal with is luckily only data loss due to hardware failures. Safety is resilience against accidental harm to people, property, or the environment. Aspects of safety in software development and operations include: Availability: A single server could be cut off from the internet by construction workers, a hard drive could reach its end of life, a fire could break out, an earthquake could hit the region. Scalability: Web services might go down when they are too successful. It’s sometimes called the “hug of death” when a huge website links to a small one. Or just when you have a successful marketing campaign. Integrity: All ways to send data are prone to errors. Packet loss or bit-flips happen. We have error-correcting protocols to deal with that. Software can also be safety-critical, for example when you think about airbags. A defect there can put lives at risk (example). Other examples of safety-critical software include traffic lights, life-support systems, software managing the electrical power grids, industrial software which is used in machines that produce medicine, pacemakers, and many more. When we are talking about security in software development, we are thinking about hackers. Wikipedia has a pretty nice explanation: Security is resilience against harm caused by others. Aspects of security in software development and operations include: Accountability: You want to have a log of changes. This includes source code. In the version control system git, you can even cryptographically sign the changes so that others cannot tamper with your changes. Availability: An attacker might run a Denial-of-Service attack (DoS). Confidentiality: Man-in-the-middle (MITM) attacks that apply packet sniffing come to my mind. Dharmil Chhadva wrote a nice article about this topic. Integrity: Think of a bank account. An attacker wants to increase the money on one account. Cryptojacking is an example where not data, but the software is affected. Measures to increase security include: TL;DR: Differences between Safety and Security
https://medium.com/plain-and-simple/safe-vs-secure-456ba5ebe95b
['Martin Thoma']
2020-09-27 07:34:08.231000+00:00
['Software Engineering', 'Software Development', 'Safety', 'Site Reliability', 'Security']
It’s Time to Break Up With the Pie Chart
It’s Time to Break Up With the Pie Chart Pie charts rely on our audience to decode quantitative information by comparing angles and area, which is actually quite difficult to do My old high school recently sent out their alumni magazine which featured the VCE (Victorian Certificate of Education) results of the Class of 2018. The report was lovely and well designed, but one thing, in particular, caught my eye. They used a donut chart to display the university courses that the students were accepted into. The chart looked a little something like this… I have changed the data slightly to deidentify the school. Pie chart of university courses So, what’s wrong with that you ask? Five years ago, I probably would have told you, “nothing, it looks great”, but the more I study data visualisation the more I understand why pie/donut charts are just trouble. Pie charts rely on our audience to decode the quantitative information by judging and comparing angles and area, which is actually quite difficult to do. Studies have shown us that it is far easier for us to compare length and position (bar and line chart), than area and angle (1,2). With a donut chart (which is just a pie chart with a hole in the middle — stay with me, don’t get hungry), we are forced to compare the arc lengths in the middle of the circle (3), which is even more difficult. Pie charts rely on our ability to compare area and angle. Donut charts rely on our ability to compare arc lengths All of these issues are further compounded when we start adding 3D charts to the mix, which distorts areas and angles. I have said it once and I will say it again, don’t use 3D charts! So, let’s work through this university course donut chart and see how we could improve it. Remember, our job as data visualisation specialists is to present the information in the most effective way so our audience is able to understand and interpret the data with very little effort. One solution, I hear you say, is to put the values directly in the donut chart. Let’s see what that looks like. Values placed in the donut chart As with any chart that has the key off to the side, our eyes have to dart back and forth to try and connect the colour to the donut slice. Have you also noticed the colour choices of the donut chart? The different shades of greens and oranges make it that little bit more difficult to match up the key to the slices. So, let’s now add the labels and the values directly to the donut, which is what I normally do. Labels and values in the donut chart Now it’s a little easier to read, maybe a little messier too. But the big question is, are we actually looking at the donut, or just reading the labels and values? In his newsletter, “Save the pies for dessert”, Stephen Few comments: “Why show a picture of the data if the picture can’t be decoded and doesn’t present the information more meaningfully? The answer is: You shouldn’t. Graphs are useful when a picture of the data makes meaningful relationships visible (patterns, trends, and exceptions) that could not be easily discerned from a table of the same data.” He is right, we get no more value from having the data presented as a donut with values directly labeled than as a plain old table. So how would I redo this? A horizontal bar. It might seem boring, but it is easy to understand and make comparisons between each category. As I mentioned before, we are much more capable of comparing length (bar) and position (line) than area and angle. Never underestimate the value of a bar chart. 
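For readers who build their charts in code rather than in a BI tool, the same redesign takes only a few lines of matplotlib. The sketch below is purely illustrative and uses invented course percentages, not the school’s data, and the bar_label call assumes matplotlib 3.4 or newer; the redesigned chart itself follows.

import matplotlib.pyplot as plt

# Invented numbers, roughly in the spirit of the chart discussed above.
courses = {
    "Science": 4,
    "Law": 5,
    "Engineering": 7,
    "Arts": 11,
    "Commerce/Business": 15,
    "Health Sciences": 26,
    "Biomedicine/Medicine": 32,
}
# Sort ascending so the largest category ends up at the top of the chart.
courses = dict(sorted(courses.items(), key=lambda item: item[1]))

fig, ax = plt.subplots(figsize=(7, 4))
bars = ax.barh(list(courses.keys()), list(courses.values()), color="#4e79a7")  # one colour only
ax.bar_label(bars, fmt="%d%%")          # label each bar directly, no legend needed
ax.set_title("University offers by course area, Class of 2018 (%)")
ax.set_xlim(0, 40)
for spine in ("top", "right"):
    ax.spines[spine].set_visible(False)  # strip visual noise
plt.tight_layout()
plt.show()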
Horizontal bar chart I’ve ordered the data from biggest to smallest, labeled each bar for easy understanding, and got rid of the “noise” by just using the one colour. As the category labels were quite long, a horizontal bar worked better than a vertical one. I’ve also added a descriptive heading and shown the total 100% down the bottom. Compare the before and after. Which one is easier to draw meaning from? Pie charts might be aesthetically pleasing when done well. If you are going to use a pie chart, consider these four simple rules: no more than 5 categories; label the data directly; use colour sparingly; remember they are not suitable for all types of data. As I’ve shown you above, they may not be the most efficient or effective way of displaying data. Think of it with this analogy: it’s like buying a great pair of shoes that look amazing but are horribly uncomfortable, so you never wear them. What’s the point? It’s a hard relationship to break up. It’s taken me a lot longer than it should have. I even used a donut in last year’s annual report. But don’t feel bad, even Steve Jobs used one… in 3D! References: Few, S. “Save the Pies for Dessert” Visual Business Intelligence Newsletter, August 2007. Evergreen, S. Presenting Data Effectively, 2nd Edition. Sage Publications. 2018. Nussbaumer Knaflic, C. Storytelling with Data. Wiley Publications. 2015. Alana Pirrone is a Design and Data Visualisation Consultant and Design and Communications Coordinator at The University of Melbourne, Australia. Find out more about Alana at alanapirrone.com.au
https://medium.com/nightingale/its-time-to-break-up-with-the-pie-chart-860a9de8a1ec
['Alana Pirrone']
2020-09-17 13:06:01.046000+00:00
['Design', 'Charts', 'Data Science', 'Data Visualization', 'Pie Charts']
The MAGA Conundrum
Many Trump voters are the entrepreneurs who took a hit from the pandemic. The lockdowns that have compromised business. Mandates that have shortened hours and seating capacity. All of these dynamics strip their bottom lines. These folks are fighting to keep their businesses afloat. Hard-earned dollars to meet payroll and expenses. Any remaining profits are funneled back into their enterprise. Most are forgoing a salary, tapping their rainy-day funds. The litany of sacrifices and the continued risk they endure in order to survive. Even the business-to-business sector, feeling the crunch of their customers’ cash flow. All they’re doing with their vote is exercising self-preservation. Here is some of their internal dialogue: The government mismanaged my tax dollars. The government failed to solve a social problem and needs more money to fix it. Why do I have to pay for their malfeasance? How are my family and I supposed to survive? How could my business prosper, much less function under such financial stress and added regulations? Can’t we find politicians with business acumen, understanding, and the ability to plan? Who is looking out for me? Who has my back? These are the honest conversations we need to have. We all need to put ourselves in each other’s shoes. To listen and engage. To craft partnerships instead of tribes. It’s the current path of partisan politics that has poisoned our society. The body politic that has created our divisions. We should be working together, on a social problem by social problem basis. Instead of choosing sides, we need to find workable solutions. For the record, many of the Trump voters I have spoken with voted for Obama in 2008. Others supported Kerry in 2004. Their answers were the same: The country’s headed in the wrong direction. We need a change. I expect my taxes to go up, but it’s best for the country. Do these sentiments sound greedy, selfish, or racist? Hateful, supremacist, or xenophobic?
https://medium.com/illumination/the-maga-conundrum-bceae489783f
['Phil Rossi']
2020-12-03 11:01:23.646000+00:00
['Society', 'Culture', 'Politics', 'Trumpism', 'USA']
Open sourcing the IWillVote widget
Open sourcing the IWillVote widget Making the polling location lookup tool accessible to all IWillVote voting location widget in action in the 2020 Democratic primary The DNC is continuing to expand our voter education efforts by making the IWillVote.com voting location widget available to any entity that wants to use it! You can use a few lines of code to embed the IWillVote voting location widget on your website and customize it for your brand. You can change the font, colors, and more to make the widget your own. Available in English and Spanish, the widget allows a voter to enter their address directly on your website, then view up-to-date and accurate voting location options (including drop off locations, if available in that voter’s state) on IWillVote.com. Get the widget here: https://democrats.org/iwv-widget/ Default IWillVote widget During the primary season, presidential campaigns and state parties used the widget to make it easier for voters to find their voting locations. We’re excited to share the widget with any organization or person who wants it. The more voters that have accurate information about where, when, and how they can vote, the stronger our democracy will be. — — — — — Copyright 2020 DNC Services Corporation / Democratic National Committee The IWillVote widget and code referenced on this page is licensed under the Apache License, Version 2.0 (the “License”); you may not use this widget except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
https://medium.com/democratictech/open-sourcing-the-iwillvote-widget-1013d8203056
['Emily Dillon']
2020-10-08 21:34:14.665000+00:00
['Voting', 'Data Engineering', 'Democracy', 'Polling Places', 'Election 2020']
TypeScript With Fewer Types?
Outdated API Type Definitions Let’s say you are on a front-end team called Team Alpha (because front-end teams are the alphas… kidding) and you are consuming an API created by Team Bravo. Your team is large, so you went with TypeScript and laid out some type definitions for a particular API endpoint: OK, cool. Not only do we have type safety, but it’s easy to understand the input and output of the API endpoint. Fast-forward one week, and Team Bravo made a change to the /user endpoint. They sent a message in the api-news channel on Slack. Some people on Team Alpha saw it, while others missed it. Suddenly, your front end is breaking and you don’t know why. Here’s what was changed: Now, the types you wrote to protect you are causing massive debugging headaches because they are out of date. After a brief amount of time banging your head against the wall, you either finally read the update from Team Bravo or you find the source of the problem yourself. Maybe the first time is no big deal. You update the types and move on with your life. But how many times are you going to have to debug, update, and refactor? Probably forever. Ideal Solution If possible, a great “hands-free” solution is to use a type generator for your API schema. If you use GraphQL, you are in luck because there are solid options for type generators (GraphQL code generator, Apollo codegen, etc). This way, when the API changes, you can run a command to download the changes in request and response types. If you had the time, you could automate it to trigger downloads every time the API repo is updated, for example. That way, you are always up to date. If you are using TypeScript on both the client and server, then I think it would be smart to leverage that fact and share types downstream. It doesn’t matter how, just as long as you aren’t writing request and response types on the client.
https://medium.com/better-programming/typescript-with-fewer-types-f7fa636476db
['Kris Guzman']
2020-03-18 18:25:07.683000+00:00
['JavaScript', 'Web Development', 'Typescript', 'Technology', 'Programming']
La Isla Bonita
Last night I dreamt of San Pedro, Just like I’d never gone, I knew the song, A young girl with eyes like the desert, It all seems like yesterday, not far away… Tropical the island breeze, All of nature wild and free, This is where I long to be, La Isla Bonita… And when the samba played, The sun would set so high, Ring through my ears and sting my eyes, Your Spanish lullaby….. One of her career-defining songs back in the ’80s. Still as fresh as early morning dew and like it just dropped today. Timeless classic. Check it out and have fun!
https://medium.com/blueinsight/blast-from-the-musical-past-5ec0368fd70f
[]
2020-11-29 14:30:43.376000+00:00
['Blue Insights', 'Culture', 'Songs', 'Artist', 'Music']
Is Trump’s Hate Actually Making America Safe for Anti-Racism, Democracy?
flickr.com This unfortunate and hopefully soon-expiring moment in U.S. history, Donald Trump’s presidency, has been anything but subtle in summoning the worst energies, impulses, and dimensions embedded in the nation’s history and still animating contemporary culture and politics. If it hadn’t been clear during his 2016 campaign when he characterized Mexican immigrants as rapists and criminals, or years earlier when he enthusiastically jumped on the Obama birther bandwagon, his overt, even celebratory, racism was laid bare in August 2017 when garden variety white supremacists marched in Charlottesville, Virginia, carrying tiki torches and chanting, “Jews will not replace us!” Trump, of course, insisted that at least some of these racist haters were, indeed, “fine people.” That racism existed in America wasn’t what was shocking, at least to many. What was shocking was Trump’s overt and unabashed expression, validation, and appeal to the nation’s racist values and instincts. A prominent view at the time was that Trump had emboldened bigotry and encouraged racial violence, that he was making America safe for racism. Indeed, his rhetoric arguably triggered mass racist shootings at the Tree of Life synagogue in Pittsburgh in October 2018 and at a Walmart in El Paso in August 2019 when a gunman admittedly targeted Mexicans and echoed many of Trump’s anti-immigrant and anti-Mexican talking points. Trump’s exorbitant racism, though, has in fact begun to have the opposite effect of making America safe for anti-racism and democracy. While I wouldn’t go so far as to call this effect a silver lining of Trump’s presidency, I would suggest that Trump has functioned as a kind of unorthodox therapist, pulling up from the nation’s unconscious the reality of racism and genocide Americans have long wanted to ignore. The dog whistles were fine. The Republican party’s encoding of racism in terms of “state’s rights” and “tax cuts” in its infamous Southern Strategy gave Americans an out, an alibi for the criminal racist practices institutionalized in the nation’s political, economic, and social structures yet articulated at the same time as at odds with the nation’s supposedly most cherished values of equality and democracy. Racism, to some extent, could be kept below the surface; de facto segregation enabled white Americans to turn a blind eye to its reality. Out of sight, out of mind. Some of America’s best friends are Black. America doesn’t see color. Trump put an end to these evasions and dog whistles. Langston Hughes, in his 1926 essay “The Negro Artist and the Racial Mountain,” wrote about his Black community, “We know we’re beautiful. And ugly, too.” Without Hughes’ eloquence, Trump in his own un-poetic way has forced white America, in a way it had not in recent decades, to confront its beauty and ugliness at once, and to make a choice. Trump, in all his grotesqueness, does in fact embody the reality of racism in America, and he has held up a mirror to white Americans; they are having to come face to face with their own grotesque image. While the verdict is far from certain about how this current moment will play out in terms of motivating a substantial structural and cultural transformation of racist America, for the moment, because of Trump ironically, anti-racism is becoming a safe position in America. 
Like an alcoholic who has hit bottom and makes the choice to recover, white America may be hitting its bottom and beginning a difficult moral self-inventory as one of the many steps toward recovery from racism. Writer, comedian, and activist Baratunde Thurston voiced this view that anti-racism is becoming a safe position when he appeared on MSNBC’s The 11th Hour, hosted by Brian Williams, just prior to Trump’s controversial rally in Tulsa, Oklahoma in June. He explained, The challenge now is we’re in an increasingly common moment, at least in awareness. That racism is still real, that police are out of control when it comes to Black people and the use of force. And it’s become so safe that the majority of the country, two-thirds, now sides with Black Lives Matter. It’s so safe that Mitt Romney feels comfortable going to a Black Lives Matter rally. It’s so safe that NASCAR has banned the confederate flag at its events. Yet this president doesn’t feel safe; he’s so insecure because he’s not really the president of all of us. He’s president of a small and shrinking group of extremists who are out of touch with the increasing consensus that there is a truth to this country we are ready to start to really reckon with. Thurston is clear that the reckoning with racism in America is just beginning. But getting a majority of Americans to agree that racism is a reality, to stop the mantra that we live in a post-racial society or that slavery happened long ago and we’re beyond it, is a start. And this reckoning might not have happened in such an accelerated way without Trump constantly making loudly visible America’s racist reality. Having President Obama in the White House, we can all remember, enabled many to declare the end of racism in America. And maybe because Obama would actually address racial violence, such as he did with the murder of Trayvon Martin, white Americans were enabled to not speak out. They were let off the hook. Their elected leader spoke out; they didn’t have to. It was a political homeostasis. And now that the president actively sponsors racism, that homeostasis is maintained by the people speaking up. Thurston highlights how the lack of leadership at the top, in the White House, has actually pushed white Americans to speak up and out and coalesce with Black Lives Matter. “What we lack in the White House,” he asserts, “we actually have in the streets.” “The people,” he says, “are re-imagining what democracy, what participation, what America could and should look like.” Back in August 2017, after Charlottesville, my spouse organized a rally in our northwest-side neighborhood in Chicago, where many police live and where racism is alive and well. She did so with some trepidation and with uncertainty about whether anybody would show up. Well over 300 folks from the neighborhood, of all ages and colors, showed up to sing, carry candles, and express their desire for a different world from the one Trump promoted. As we walked the outskirts of the park, hundreds of drivers honked in support. People were hungry to give voice to their desire for a different world, for a racism-free world. They were waiting to come out of the woodwork, but they didn’t feel safe. Anti-racism wasn’t necessarily a safe position in this neighborhood. It’s becoming so, increasingly. This isn’t to say virulent racism isn’t alive and well. It means people are becoming emboldened in speaking out against it and resisting it. Trump has made neutrality and ignorance no longer viable. 
We’ll see how and if American democracy plays out.
https://medium.com/the-national-discussion/is-trumps-hate-actually-making-america-safe-for-anti-racism-democracy-57e67fb8df86
['Tim Libretti']
2020-10-29 17:27:34.836000+00:00
['Society', 'Politics', 'Equality', 'Race', 'Racism']
Get, Set… STOP!
What if I told you Morpheus never said: “What if I told you?” I know you saw that meme a thousand times but have you actually heard him say it in the movie? You didn’t, because it didn’t happen. Darth Vader also never said “Luke, I am your father” and Captain Kirk never said, “Beam me up, Scotty”. People keep repeating these “quotes” so we assume they are true. Cargo Cult This phenomenon is not limited to spoken word — it afflicts human behaviors also. Since software is developed by humans (for now), we come across it in software development. It’s called cargo cult programming and Wikipedia defines it as: Ritual inclusion of code or program structures that serve no real purpose. One practice that is widespread among developers is the creation of getters and setters. They always come together and we create them as soon as the entity is created. This is so deeply ingrained in our workflow that nowadays IDEs offer to automatically generate these getters and setters for us. But is there a reason for doing this or is it just a ritual we have accepted without stopping to wonder why? Set the lights to off Let’s look at the setters first. By setters I mean methods like this: public function setName(string $name): void { $this->name = $name; } This method clearly changes the name of the object in question. In the real world, we call this renaming. I have never heard someone say that a person has “set its name to another name”. Even if you ask the developer who wrote the method chances are they will tell you that it renames the object. So, here’s an interesting proposition for you: if renaming is what the method does, call it rename. public function rename(string $name): void { $this->name = $name; } Now our method name describes precisely what our method does and we used a domain-specific word for it. The (domain) concept of renaming a user is reflected in our code. This gives developers and non-developers a common (ubiquitous) language when discussing the project. Let’s look at another example. Say we have a `Subscription` class with the following setter method: public function setActive(bool $active): void { $this->active = $active; } As with the previous example, this method doesn’t clearly reflect the domain concept which is activating and deactivating a subscription. So we split it into: public function activate(): void { $this->active = true; } public function deactivate(): void { $this->active = false; } Again we have expressed these concepts explicitly in our code; furthermore, there is another benefit gained by this refactoring. Having a setter method allowed the outside world to set the internal state of the Subscription object, whereas now, the Subscription object is responsible for setting its own internal state. Anemic vs. Rich By having myriad setter methods on your entity, you end up with what’s called the anemic domain model. class User { public function setName(string $name): void //… public function setBirthDate(DateTime $date): void //… public function setAddress(Address $address): void //… } Your User entity is more of a data structure than an object. Data structures expose their data and have little or no behavior while objects protect their data and expose behaviors. 
To have a richer domain model, we refactor this to: class User { public function __construct(string $name, DateTime $birthDate, Address $address) { $this->name = $name; $this->birthDate = $birthDate; $this->address = $address; } public function rename(string $name): void //… public function updateAddress(Address $address): void //… } Now our User class is protecting its internal state (invariants) and exposing only behaviors to the outside world. “Get me your name” Now, to the getters. From my experience, this is harder for people to get (pun intended), and, to be clear, by getters I mean methods like this: public function getName(): Name { return $this->name; } This method returns the name of the object in question. Many developers would say this is clear from its name. The get + Name tells us it returns the Name. Actually, the “get” prefix guarantees nothing. In actuality, it is the return type of the method that guarantees this method returns a Name. If you ditch the get prefix in the method name then you lose no information in the process: public function name(): Name { return $this->name; } If you can remove a word without losing any information, then that word is just noise. But this is how we talk, right? Not really. Recall from memory a situation where you needed to reveal some personal information about yourself, such as when you were at a bank. When the bank clerks ask you for your address, do they say “Get me your address” or “Get me your name”? They don’t. In fact, when we use the word get in real life it usually isn’t a question but an imperative that has some implied side effect. Say you want a beer so you tell your friend “Hey, get me a beer from the fridge”. The side effect of him getting you a beer is that the fridge will have one less beer afterward. This will happen every time you call him to get you a beer. I’ll admit, it would be nice if beer fridges would follow our programming conventions of “getters” not changing the internal state of the object, but they don’t.
https://medium.com/we-are-madewithlove/get-set-stop-8f0dcde4323d
['Zvonimir Spajic']
2020-06-02 17:59:26.307000+00:00
['Software Development', 'Software Engineering', 'Programming', 'PHP']
How to Build a Reporting Dashboard using Dash and Plotly
A method to select either a condensed data table or the complete data table. One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present: Code Block 17: Radio Button in layouts.py The callback for this functionality takes input from the radio button and outputs the columns to render in the data table: Code Block 18: Callback for Radio Button in layouts.py File This callback is a little bit more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below is changing the data presented in the data table based upon the dates selected using the callback statement, Output('datatable-paid-search', 'data'), this callback is changing the columns presented in the data table based upon the radio button selection using the callback statement, Output('datatable-paid-search', 'columns'). Conditionally Color-Code Different Data Table cells One of the features which the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table highlighted based upon a metric’s value; red for negative numbers for instance. However, conditional formatting of data table cells has three main issues. There is a lack of formatting functionality in Dash Data Tables at this time. If a number is formatted prior to inclusion in a Dash Data Table (in pandas for instance), then data table functionality such as sorting and filtering does not work properly. There is a bug in the Dash data table code in which conditional formatting does not work properly. I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide: Code Block 19: Conditional Formatting — Highlighting Cells The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash. *This has since been corrected in the Dash Documentation. Conditional Formatting of Cells using Doppelganger Columns Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and Dash data table. These doppelganger columns had either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug when the decimal portion of a value is not considered by conditional filtering). 
Then, the doppelganger columns can be added to the data table but are hidden from view with the following statements: Code Block 20: Adding Doppelganger Columns Then, the conditional cell formatting can be implemented using the following syntax: Code Block 21: Conditional Cell Formatting Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%). One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values. The complete statement for the data table is below (with conditional formatting for odd and even rows, as well as highlighting cells that are above a certain threshold using the doppelganger method): Code Block 22: Data Table with Conditional Formatting I describe the method to update the graphs using the selected rows in the data table below.
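As a self-contained illustration of the doppelganger idea, here is a minimal sketch. It is not the dashboard’s actual code: the column names and figures are invented, and it assumes a recent Dash version where filter_query and hidden_columns are available.

import dash
from dash import dash_table
import pandas as pd

# "Revenue YoY (%)" holds the formatted display value; the doppelganger column
# holds the raw value multiplied by 100 and is used only for filtering.
df = pd.DataFrame({
    "Channel": ["Brand", "Non-Brand", "Shopping"],
    "Revenue YoY (%)": ["12.4%", "-8.2%", "3.1%"],
    "Revenue_YoY_percent_conditional": [1240, -820, 310],
})

app = dash.Dash(__name__)
app.layout = dash_table.DataTable(
    columns=[
        {"name": "Channel", "id": "Channel"},
        {"name": "Revenue YoY (%)", "id": "Revenue YoY (%)"},
        # The doppelganger is declared so it can be filtered on, then hidden.
        {"name": "conditional", "id": "Revenue_YoY_percent_conditional"},
    ],
    hidden_columns=["Revenue_YoY_percent_conditional"],
    data=df.to_dict("records"),
    style_data_conditional=[
        {
            # Filter on the hidden doppelganger, colour the visible column.
            "if": {
                "filter_query": "{Revenue_YoY_percent_conditional} < 0",
                "column_id": "Revenue YoY (%)",
            },
            "color": "red",
        }
    ],
)

if __name__ == "__main__":
    app.run_server(debug=True)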
https://medium.com/p/4f4257c18a7f#58a3
['David Comfort']
2019-03-13 14:21:44.055000+00:00
['Dash', 'Dashboard', 'Data Science', 'Data Visualization', 'Towards Data Science']
Colorizing a Visualization
Colorizing a Visualization Using Harmony and the Albers App to achieve color harmony in your data visualizations The flow chart above diagrams the process of colorizing a visualization using blank templates from the Interaction of Color app, an invaluable tool both for learning about color and for designing visualizations. Before I get into that, though, I need to explain Color Harmony, a methodology for choosing colors that work together in creating an image. As examples, I’ll demonstrate how I’ve used Interaction of Color to build color schemes for a treemap visualization and a bubble chart. After that, you’ll see how one of our examples fails to address color deficiency and how to address this issue by using text in addition to a color scheme. Interaction of Color is available for the iPad in the App Store. There is a free trial version and a complete version for purchase that costs $13.99. Another tool used in this exploration of color is Adobe Capture, which I use for verifying Color Harmonies. This is a free app available online from Apple’s App Store and Google’s Google Play site. The app works on Android, iPhone and iPad devices. Information visualization examples are created with Tableau Public and Coblis — Color Blindness Simulator is used to evaluate our visualization examples for color deficiency considerations. Color Harmony First, some background on the centuries-old concept of Color Harmony. Color Harmony is the process of choosing colors that work well together in the composition of an image. Similar to concepts in music, these harmonies are based around color combinations on the Color Wheel that help to provide common guidelines for how color hues will work together. Color Wheels are tools that depict color relationships by organizing colors in a circle to visualize how the hues relate to each other. Isaac Newton is credited with creating the Color Wheel concept when he closed the linear color spectrum into a color circle in the early 1700s. Over the centuries, artists and color scientists amplified his concept to include color harmonies. Below we show the Red-Green-Blue Color Wheel based on the concept that Red, Green and Blue (RGB) are the color primaries for viewing displays like what we see on our mobile devices. Two colors that oppose one another on the Color Wheel are considered to form a complementary Color Harmony. Three colors that are adjacent to each other on the Color Wheel form an analogous Color Harmony. There are many more Color Harmony fundamentals, but we will focus on these two basic ones in an illustration below. Red-Green-Blue (RGB) Complementary and Analogous Color Harmonies Color Deficiency Issues Color vision is possible due to the cones or photoreceptors in the retina of our eyes. In humans, the existence of three types of photoreceptors or cones, each sensitive to a different part of the visual spectrum of light, provides for rich color vision. The first set of cones absorbs long waves of light in the Red range. The second set of cones absorbs middle waves of light in the Green range. The third set of cones absorbs short waves of light in the Blue range. If one or more of the cones does not perform properly, a color deficiency results. A red cone deficiency is classified as protanopia. A green cone deficiency is classified as deuteranopia. A blue cone deficiency is classified as tritanopia. 
There are online tools or apps that attempt to simulate color deficiencies and allow us to check how our visualizations might be viewed by the approximately 4 percent of the population that might have a color deficiency. For our visualization examples here, we use the online Color Blindness Simulator, Coblis, to check for color deficiencies. In addition to modifying a color scheme that fails a color deficiency test, another way to help viewers easily comprehend a visualization is to present the data with double or more visual channels. Using both text and color to redundantly label data elements is one method of applying this concept, which you will see in both of our examples below. This becomes especially helpful in Example 2. Interactions of Color In 2013, Yale University Press released an app for the iPad based on the book Interaction of Color. The book was originally published by Josef Albers fifty years prior in 1963. The mobile tool is designed to be a near-digital replica of the book, including the implementation of the original Bakersville typeface and layout of the text columns with twenty-first century upgrades such as an interactive color wheel. The app provides blank templates that serve as exercises for you to directly and interactively learn the concepts Albers discusses. Original examples are provided but you can create your own designs. The app was designed and implemented by Potion Design Studio under the direction of Yale University Press and the Josef and Annie Albers Foundation. Josef Albers designed the Interaction of Color book as a handbook and teaching aid to explain complex color theory concepts to artists, instructors and students. The app extends the case studies further by allowing for you to create you own interactive color studies. Below, we’ll see how the app can extend beyond a learning experience and be used to aid in designing visualizations. First, with an approach that uses the Interaction of Color to build a color scheme for a Treemap visualization. Next, using Interaction of Color to create a color scheme for a Bubble Chart. Interaction of Color provides us with templates that allow us to layout color concepts while we are working with the computational code for a data visualization. Example 1: A complementary color scheme for a Treemap In Information Visualization, a treemap permits the display of hierarchical data by creating a set of nested rectangles. Each branch of the tree is defined with a rectangle and tiled with smaller rectangles that represent sub-branches. Color and size dimensions of rectangles are correlated with the tree structure. This allows for seeing patterns in a data set that would be challenging to find in other ways. Treemaps can handle the display of thousands of items simultaneously. We begin with a template from the Interaction of Color designed to explore graduation of color or the progression of equal steps between light and dark. Using this blank template, we can build a Cyan and Orange color scheme that allows us to simulate a tree map visualization of two variables. Using a template from the Albers App to build a Complementary Color Harmony for a Treemap Visualization. Our Cyan-Blue and Orange Color Harmony, that we created with the Albers App, is defined as a complementary Color Harmony. We can show this below by using the Adobe Capture app for the iPhone to depict our color harmony. Using the Adobe Capture App to verify we have a Cyan-Blue and Orange complementary color harmony. 
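As a rough illustration of the complementary-harmony idea (my own sketch, not part of the workflow above, which relies on the Interaction of Color and Adobe Capture apps), a complement can be approximated by rotating a hue half a turn around the color wheel; the starting cyan-blue value below is a hypothetical one:

```python
import colorsys

def complement(r, g, b):
    """Return the complementary RGB color by rotating the hue 180 degrees."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    h = (h + 0.5) % 1.0  # half a turn around the color wheel
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

# A cyan-blue similar to the treemap example (assumed value, for illustration only).
print(complement(0, 150, 200))  # -> a saturated orange, its complement
```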
Our next step is to apply our Cyan-Blue and Orange complementary color harmony to visualizing a data set as a Tree Map. Tableau Public, a freely available tool for building information visualizations from data sets, lets us create our example below. In addition to color, we add text in our treemap visualization to label the observed data. Treemap Visualization with a Blue-Cyan and Orange color harmony. This three step approach allows us to experiment first with the colors, then consider them in the context of the color wheel to see if they have color harmony, and finally to apply them to our target visualization. Process for Creating our Complementary Color Harmony. Our final step is to check for color deficiency with a Color Blindness Simulator like Coblis. Here are the results for our treemap. Color Deficiency checks, performed with the Color Blindness Simulator — Coblis, for our Treemap. These Coblis results indicate that even if our audience included members with color deficiencies, they would be able to see color differences in the color scheme we have designed. Example 2: An analogous Color scheme for a Bubble Chart A bubble chart is a data visualization technique where multiple circles (bubbles) are displayed in a two-dimensional plot. It is considered a generalized version of a scatter plot where the dots are replaced with bubbles. Frequently, a bubble chart depicts the values of three numeric variables: the observed data are shown as a circles or bubbles with the horizontal and vertical positions depicting the values of two variables. To develop our bubble chart color scheme, we notice that one of the templates in Interaction of Color is composed of a series of circles. Using the blank template, we experiment with selecting a color scheme and determine how the color elements interact with varying backgrounds. Rather than a complementary scheme, we decide to explore the analogous Color Harmony principle where we select colors adjacent to each other on the Color Wheel. In this case, the analogous scheme will be Red, Blue and Purple as shown below. Using a template from the Albers App to build an Analogous Color Harmony for a Bubble Chart Visualization. Next, we further verify our analogous color by using Adobe Capture. Using the Adobe Capture App to verify we have an analogous Red, Purple and Blue color harmony. In our next step, we apply our Red, Purple and Blue analogous Color Harmony to visualizing a data set as a bubble chart. We’re again using Tableau Public to create the example below. In addition to color, we add text in our bubble chart to label the observed data. Bubble Chart Visualization with a Red, Purple and Blue analogous color harmony. The process is similar, with an added step in the Interaction of Color phase. Flow Chart of the Process for Creating our Analogous Color Harmony. Our final step is to check for color deficiency with the Color Blindness Simulator tool, Coblis. Color Deficiency checks, performed with the Color Blindness Simulator — Coblis, for our Bubble Chart Visualization. These results indicate that an individual with color deficiencies might have difficulty interpreting our visualization if we only use our color scheme. We can consider redesigning our visualization or note that we have provided a text descriptor for each of the bubbles shown in our visualization. As a result, we have a double encoded visualization that can suffice for the visual display of our data. 
Although this is not an optimal solution, it is an acceptable one and an approach that is used in practice frequently. Concluding Remarks There are many approaches to building color schemes for data visualizations. In addition to the examples shown here, Michael Yi provides an excellent Nightingale discussion on “How to Choose Colors for Your Data Visualization” (October 24, 2019) based on the Sequential, Diverging and Qualitative color scheme concepts developed by Cynthia Brewer for the ColorBrewer tool. For further discussion on my approaches, please see my 2016 book on “Applying Color Theory to Digital Media and Visualization” published by CRC Press. Theresa-Marie Rhyne is a Visualization Consultant with extensive experience in producing and colorizing digital media and visualizations. She has consulted with the Stanford University Visualization Group on a Color Suggestion Prototype System, the Center for Visualization at the University of California at Davis and the Scientific Computing and Imaging Institute at the University of Utah on applying color theory to Ensemble Data Visualization. Prior to her consulting work, she founded two visualization centers: (a) the United States Environmental Protection Agency’s Scientific Visualization Center and (b) the Center for Visualization and Analytics at North Carolina State University. Her book on “Applying Color Theory to Digital Media and Visualization” was published by CRC Press in November 2016.
https://medium.com/nightingale/colorizing-a-visualization-a02cbc5f91fc
['Theresa-Marie Rhyne']
2020-04-02 13:08:52.480000+00:00
['Information Visualization', 'Design', 'Colors', 'Color Theory', 'Data Visualization']
How I’m Drinking More Water By Measuring Less of It
In middle school, I was invited to a science camp at a local university. For one activity we were set up in a lab and given an apron and a set of protective goggles and gloves. Our experiment’s instructions were outlined on a laminated sheet placed at each station. The fume hoods were turned on. The atmosphere in the room was dutiful and anticipatory. We were making silly putty. To begin, I carefully measured each ingredient, added it to the bowl, and my lab partner stirred. A liquid substance formed—nothing like the consistency of silly putty. I looked around at other stations to compare. While I was doing this, a supervisor—a college student—came over to our station, scooped up an indeterminate amount of cornstarch and plopped it in our bowl. “Excuse me, MA’AM,” I wanted to say. “This is SCIENCE.” My labmate continued to stir as I leaned forward with my hands on the table. I couldn’t even look at it. Without consulting us, this supervisor had ruined our experiment. She completely disregarded the measurements. Within a minute though, I held the orange silly putty in my palm—it was nothing short of perfect.
https://medium.com/curious/how-im-drinking-more-water-by-measuring-less-of-it-2f42060d69a5
['Katie Martin']
2020-12-22 13:02:19.700000+00:00
['Health', 'Self Improvement', 'Self', 'Lifestyle', 'Mindfulness']
Feeble & Freed
There is power in word and image, genesis of thought. Such are these gifts – Fragments of forever found in the fleeting feeble and fickle frailty of those now Freed.
https://medium.com/prov-writers/feeble-freed-1f26fa0fa8dd
[]
2017-11-15 17:17:26.783000+00:00
['Truth', 'Response', 'Gospel', 'Poetry', 'Creativity']
Make the jump from self-taught coder pupil to employee
Hello World! The first beta version of Menternship was released today to the general public. It’s been about a month in the making, and for those who haven’t skimmed the original article, Menternship is a platform that connects professional programmers with side projects to new programmers who are looking to break into the industry. Professional programmers (mentors) get volunteer programmers working on their projects, and new programmers (interns) get more hands on experience. The beta release of Menternship requires a LinkedIn account for a couple of reasons. It’s a simple way of verifying the qualifications of the mentors, and because recruiters and human resource employees generally check LinkedIn accounts to look over connections and verify resume claims, it pushes volunteers to maintain a public profile that will help them get a job faster. In the future we may allow other sign up options. We’ve included a single type of internship to start. The requirements are 40 hours of work in two months or less, coupled with an on-boarding experience that includes at least one hour of pair programming, and excellent technical documentation for the project. For legal purposes, it’s important to note that Menternship is not an exchange of work for education. It is an opportunity to volunteer in a setting where you will be exposed to the technology stack that you are interested in becoming employed in. We call these volunteer positions on Menternship internships, but this is just site specific language. The hope from any volunteer experience is that you’ll help someone out, and have a positive experience along the way. Menternship cannot guarantee that the experience will be educational or even worthwhile, but if the experience is poor, you can always walk away. Keep in mind though that managing programming interns is not a simple task. Often times the intern is a drain on resources as they require the attention of experienced developers to bring them up to speed. Teaching interns is also a skill on its own, and many of the experienced programmers who join Menternship will be learning about managing and instructing programmers for the first time. This is a new kind of experience for everyone, so lets have a little patience when approaching Menternship from either side. There are a couple of tips though that we’ve developed to improve the experience for both mentors and interns. Setup excellent lines of communication. Email is the most primitive form of remote communication, so to get the best experience you’ll want to use a messaging service like Slack or Discord, and do lots of screen sharing and voice or video calls using services like Skype or Google Hangouts. First impressions matter, but so do first experiences looking and trying to run new code. As a project owner, you might think your application is well documented and easy to understand, but it’s best to make sure that your new interns have a positive first experience by scheduling a pair programming session to get them started. Not only will this strengthen the relationship between you and the intern, it will ensure that if a problem does occur on first startup, you’ll be able to understand why it came up and address it immediately. Ask questions, and make sure that everyone understands that asking questions is not just allowed, it’s encouraged. If programmers aren’t asking questions, that’s usually a sign of poor communication, and poor communication leads to a bad experience for everyone. 
So ask questions, and respond to those questions in a timely manner. Don’t forget that the ultimate goal for these interns is to get hired as a programmer, so when an internship does end, make sure that the interns are also in a better place publicly. Leave them a recommendation on LinkedIn, make suggestions about different programmers to follow through different platforms, give them projects that read well on resumes when completed, and of course, offer to act as a reference on any future job applications as needed. Menternship is in its infancy right now, and what I would call barely MVP. We’ve got a lot of features on the horizon that hopefully a couple of interns will be able to contribute to. The first feature is a rating system. The only rating system right now is the number of interns an internship has attracted, and the status of those interns. A good project will have either active or completed interns, and a bad project might have some combination of fired and will not complete interns. So that being said, there’s even more of a mutual interest in having a positive experience, as bad experiences reflect poorly on both parties. In App Notifications. Right now we’re relying entirely on email to notify members of important events, like new applications, accepted applications, and completed hours. In the near future we’d like users to be able to rely on an in-house notification system similar to LinkedIn, Stack Overflow, and so many others. Different types of internships. Right now we have a single official 40 hour internship, but in the future we’d like to introduce shorter lightning internships (a possible use case would be for experienced developers looking to gain experience with unfamiliar technologies), longer format internships, and pair internships (where internships are completed as a team of two). If you’re interested in the Menternship project, you can signup today and give this article a clap. The project is completely open source, and viewable at https://github.com/Menternship. If you want to discuss the project please join us on Slack, or send me or the official Menternship twitter account a tweet. Thanks for reading.
https://medium.com/hackernoon/interns-for-your-side-projects-beta-web-app-released-today-d8a93ceea2b3
['Leigh Silverstein']
2017-10-26 13:11:44.637000+00:00
['Programming', 'Software Development', 'Startup', 'Tech', 'Education']
Clean Architecture: Standing on the shoulders of giants
This post is part of The Software Architecture Chronicles, a series of posts about Software Architecture. In them, I write about what I’ve learned on Software Architecture, how I think of it, and how I use that knowledge. The contents of this post might make more sense if you read the previous posts in this series. Robert C. Martin (AKA Uncle Bob) published his ideas about Clean Architecture back in 2012, in a post on his blog, and lectured about it at a few conferences. The Clean Architecture leverages well-known and not so well-known concepts, rules, and patterns, explaining how to fit them together, to propose a standardised way of building applications. Standing on the shoulders of EBI, Hexagonal and Onion Architectures The core objectives behind Clean Architecture are the same as for Ports & Adapters (Hexagonal) and Onion Architectures: Independence of tools; Independence of delivery mechanisms; Testability in isolation. In the post where Clean Architecture was published, this was the diagram used to explain the global idea: Robert C. Martin 2012, The Clean Architecture As Uncle Bob himself says in his post, the diagram above is an attempt at integrating the most recent architecture ideas into a single actionable idea. Let’s compare the Clean Architecture diagram with the diagrams used to explain Hexagonal Architecture and Onion Architecture, and see where they coincide: Externalisation of tools and delivery mechanisms Hexagonal Architecture focuses on externalising the tools and the delivery mechanisms from the application, using interfaces (ports) and adapters. This is also one of the core fundaments of Onion Architecture, as we can see in its diagram: the UI, the infrastructure and the tests are all in the outermost layer. The Clean Architecture has exactly the same characteristic, having the UI, the web, the DB, etc., in the outermost layer. In the end, all application core code is framework/library independent. Dependencies direction In the Hexagonal Architecture, we don’t have anything explicitly telling us the direction of the dependencies. Nevertheless, we can easily infer it: the Application has a port (an interface) which must be implemented or used by an adapter. So the Adapter depends on the interface; it depends on the application, which is in the centre. What is outside depends on what is inside; the direction of the dependencies is towards the centre. In the Onion Architecture diagram, we also don’t have anything explicitly telling us the dependencies direction; however, in his second post, Jeffrey Palermo states very clearly that all dependencies are toward the centre. The Clean Architecture diagram, in turn, is quite explicit in pointing out that the dependencies direction is towards the centre. They all introduce the Dependency Inversion Principle at the architectural level. Nothing in an inner circle can know anything at all about something in an outer circle.
Furthermore, when we pass data across a boundary, it is always in the form that is most convenient for the inner circle. Layers The Hexagonal Architecture diagram only shows us two layers: inside of the application and outside of the application. The Onion Architecture, on the other hand, brings to the mix the application layers identified by DDD: Application Services holding the use case logic; Domain Services encapsulating domain logic that does not belong in Entities nor Value Objects; and the Entities, Value Objects, etc. When compared to the Onion Architecture, the Clean Architecture maintains the Application Services layer (Use Cases) and the Entities layer but it seems to forget about the Domain Services layer. However, reading Uncle Bob’s post we realise that he considers an Entity not only as an Entity in the DDD sense but as any Domain object: “An entity can be an object with methods, or it can be a set of data structures and functions.” In reality, he merged those 2 innermost layers to simplify the diagram. Testability in isolation In all three Architecture styles the rules they abide by provide them with isolation of the application and domain logic. This means that in all cases we can simply mock the external tools and delivery mechanisms and test the application code in isolation, without using any DB nor HTTP requests. As we can see, Clean Architecture incorporates the rules of Hexagonal Architecture and Onion Architecture. So far, the Clean Architecture does not add anything new to the equation.
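Before looking at what Clean Architecture adds, here is a minimal sketch of that shared dependency rule in code (my own illustration in Python; the original post is language-agnostic and does not include this example). The application core defines the port, and the adapter in the outer circle depends on it, never the other way around:

```python
from abc import ABC, abstractmethod

# Inner circle: the application core defines the port it needs.
class UserRepository(ABC):                # port (interface)
    @abstractmethod
    def find_by_id(self, user_id: int) -> dict: ...

class GetUserUseCase:                     # use case / interactor
    def __init__(self, repository: UserRepository):
        self.repository = repository      # depends only on the abstraction

    def execute(self, user_id: int) -> dict:
        return self.repository.find_by_id(user_id)

# Outer circle: the adapter implements the port, so the dependency points inward.
class InMemoryUserRepository(UserRepository):
    def __init__(self, users: dict):
        self.users = users

    def find_by_id(self, user_id: int) -> dict:
        return self.users[user_id]

use_case = GetUserUseCase(InMemoryUserRepository({1: {"id": 1, "name": "Ada"}}))
print(use_case.execute(1))
```

Swapping the in-memory adapter for a database-backed one would not touch the use case, which is exactly the testability-in-isolation property all three styles aim for.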
However, in the bottom right corner of the Clean Architecture diagram, we can see a small extra diagram… Standing on the shoulders of MVC and EBI The small extra diagram in the bottom right corner of the Clean Architecture diagram explains how the flow of control works. That small diagram does not give us much information, but the blog post explanations and the conference lectures given by Robert C. Martin expand on the subject. Robert C. Martin In the diagram above, on the left side, we have the View and the Controller of MVC. Everything inside/between the black double lines represents the Model in MVC. That Model also represents the EBI Architecture (we can clearly see the Boundaries, the Interactor and the Entities), the “Application” in Hexagonal Architecture, the “Application Core” in the Onion Architecture, and the “Entities” and “Use Cases” layers in the Clean Architecture diagram above. Following the control flow, we have an HTTP Request that reaches the Controller. The controller will then: Dismantle the Request; Create a Request Model with the relevant data; Execute a method in the Interactor (which was injected into the Controller using the Interactor’s interface, the Boundary), passing it the Request Model and the Presenter; The Interactor: 1. Uses the Entity Gateway Implementation (which was injected into the Interactor using the Entity Gateway Interface) to find the relevant Entities; 2. Orchestrates interactions between Entities; 3. Creates a Response Model with the data result of the Operation; 4. Populates the Presenter giving it the Response Model; 5. Returns the Presenter to the Controller; Uses the Presenter to generate a ViewModel; Binds the ViewModel to the View; Returns the View to the client. The only thing here where I feel some friction and do differently in my projects is the usage of the “Presenter“. I rather have the Interactor return the data in some kind of DTO, as opposed to injecting an object that gets populated with data. What I usually do is the actual MVP implementation, where the Controller has the responsibility of receiving and responding to the client. Conclusion I would not say that the Clean Architecture is revolutionary because it does not actually bring a new groundbreaking concept or pattern to the table. However, I would say that it is a work of the utmost importance: It recovers somewhat forgotten concepts, rules, and patterns; It clarifies useful and important concepts, rules and patterns; It tells us how all these concepts, rules and patterns fit together to provide us with a standardised way to build complex applications with maintainability in mind. When I think about Uncle Bob work with the Clean Architecture, It makes me think of Isaac Newton. Gravity had always been there, everybody knew that if we release an apple one meter above the ground, it will move towards the ground. The “only” thing Newton did was to publish a paper making that fact explicit*. It was a “simple” thing to do, but it allowed people to reason about it and use that concrete idea as a foundation to other ideas. In other words, I see Robert C. Martin is the Isaac Newton of software development! :D Resources 2012 — Robert C. Martin — Clean Architecture (NDC 2012) 2012 — Robert C. 
Martin — The Clean Architecture 2012 — Benjamin Eberlei — OOP Business Applications: Entity, Boundary, Interactor 2017 — Lieven Doclo — A couple of thoughts on Clean Architecture 2017 — Grzegorz Ziemoński — Clean Architecture Is Screaming * I know Sir Isaac Newton did more than that, but I just want to emphasize how important I consider the views of Robert C. Martin. Published September 28, 2017
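For readers who prefer code to diagrams, the request flow described earlier (Controller → Interactor → Presenter) might look roughly like this. This is my own simplified Python sketch; the class names, the order example and the details are assumptions, not code from the original post:

```python
class InMemoryEntityGateway:                  # outer layer: Entity Gateway implementation
    def find_customer(self, customer_id):
        return {"id": customer_id, "name": "Ada"}

class OrderPresenter:                          # turns the Response Model into a ViewModel
    def present(self, response_model):
        self.view_model = f"Order {response_model['order_id']} for {response_model['customer']}"

class CreateOrderInteractor:                   # the use case, injected with the gateway
    def __init__(self, entity_gateway):
        self.entity_gateway = entity_gateway

    def execute(self, request_model, presenter):
        customer = self.entity_gateway.find_customer(request_model["customer_id"])
        response_model = {"order_id": 42, "customer": customer["name"]}
        presenter.present(response_model)      # populate the Presenter with the Response Model

class OrderController:                         # dismantles the request and drives the use case
    def __init__(self, interactor):
        self.interactor = interactor

    def handle(self, http_request):
        request_model = {"customer_id": http_request["customer_id"]}
        presenter = OrderPresenter()
        self.interactor.execute(request_model, presenter)
        return presenter.view_model            # the View would bind and render this

controller = OrderController(CreateOrderInteractor(InMemoryEntityGateway()))
print(controller.handle({"customer_id": 7}))   # Order 42 for Ada
```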
https://medium.com/the-software-architecture-chronicles/clean-architecture-standing-on-the-shoulders-of-giants-168e6b7b00b9
['Herberto Graça']
2017-12-18 08:01:01.967000+00:00
['Programming', 'Software Architecture', 'Software Engineering', 'Software Development']
The Mad Woman on the Fifth Floor
The Mad Woman on the Fifth Floor Agnes Follow Dec 13 · 3 min read Was she thinking “I’m a mad woman, hear me roar?” Was she pure emotion and not thinking at all? By Agnes I count one cop car, two, three. Three too many, neatly boxing in my street. Curiosity sweeps in like a breeze. Alert eyes, furtive glances, little chills. Did something happen? Is it going on still? I locate the closest officer. He’s leaning against the car, eyes forward, hand on gun. I walk up slowly and ask what’s going on. It’s 6am and it’s not quite dawn, but people are still sleeping. “Street is blocked, we are working,” he says, stating the obvious. I don’t leave and he continues “Mad woman from the fifth floor is throwing things, she smashed a few cars already.” Mad woman. There’s been a few of those, mad men too I hear. It must be the heat or the time of year. It must be the quarantine, the food, the booze, the sum of a million little things and pent-up feelings. I stare at the street, the policeman quiet beside me. They must see a lot of this. I thank him and go back to get my stuff from the car. I start to walk towards my apartment without looking up. The police officers watch me, but they never say stop so I hurry on. Quickly, quietly, eager to get out of the emotional drop zone. It’s so quiet now. I can’t help imagining the neighbors waking up to that. In the 5AM silence: the dented car roofs setting off blaring alarms, pigeons shrieking and the shrill sound of breaking glass. Could they hear the stop and start crackle of the police radios as they surveyed the scene and pinpointed the mad woman on the fifth. I glimpse the wreckage out of the corner of my eye. Broken chairs and boxes and glass. What happened this morning to make her get up like that? Is it her stuff she’s throwing or is there more to it than that? A lovers’ spat? Somebody she’s grieving? Or is it herself she’s lashing out at? It’s less than a block and I’m quickly inside, but I can’t shake the image as the elevator goes up. In a stroke of synesthesia, I see the strewn pieces of broken stuff and hear the ripping sound. The shred, the fracture, the crash. I think about her throughout the day. The mad woman from the fifth floor. I wonder if I’ve seen her before. This neighbor I’ve never met, but now know about. I’ve seen the urban carpet made of her stuff. How little we know about what goes on behind closed doors, the feelings that flicker and flame up behind the windows that blink on and off around us. Now, I look twice at the couple that holds the door as I step out, the old man from downstairs, the teenager with the dog, the hippy biker from the top floor. We all look normal, until we’re not.
https://medium.com/medusas-musings/the-mad-woman-on-the-fifth-floor-41622f98a005
[]
2020-12-13 19:14:51.733000+00:00
['Short Story', 'Neighbors', 'End Of Year', 'People']
Using language as data
Using language as data The inherent difficulty of balancing algorithms and design Language is always situated, i.e., it is uttered in a specific situation at a particular place and time, and by an individual speaker with all the characteristics outlined above. All of these factors can therefore leave an imprint on the utterance. [1] I have a bit of an obsession with language and communication that is difficult to summarize with a pithy list, but I will try: At any given time, I am reading between 5 and 25 books, and I am learning between 1 and 4 languages, with varying degrees of success; I am growing a note of phrases I find pleasing (it includes gems such as ‘tumble and lollop,’ ‘the nudge of loveliness,’ and ‘tech-savvy pseudo-hippies’); when I rebelled against my chosen field of data science, it wasn’t to backpack across Europe, but to be a freelance writer and editor, and when I returned to said field, in an attempt to tame my unwieldy interests, I began studying natural language processing with a passion that could only be contained by the responsibilities associated with being the working mother of an infant son; I enjoy complex sentences wrangled by thoughtful punctuation and the slant-usage of words; and while I don’t write anymore, not really — just the spare article here and there — I still dabble in what I refer to as “abstract poetry” (but don’t hold your breath waiting for the publication of my e-chapbook — I have chosen instead to have ribbon-bound pages found or forever lost after my death). So when I (virtually) attended Future Data 2020, a conference about the next generation of data systems, I was more than a little excited about the talk given by Marti Hearst, Professor at the University of California, Berkeley, entitled the Intersection of Language, Algorithms, and Design (available here). Language, algorithms, and design Three seemingly disparate words, but each describes a human attempt to organize and ascribe meaning to an infinitely complex world. To consider these topics together, as Professor Hearst does in her talk, is to explore human cognition, with algorithms as logical processes that can be translated into computer code to process inputs and produce outputs, language as a verbal representation used for inter-human communication of facts, falsehoods, fantasies, and figments, and design as a visual alternative to algorithmic and textual representations that can facilitate (or, if wielded improperly, muddle) human understanding. Professor Hearst’s talk more deeply discusses the three topics than I will here, as I want to focus on these topics with language as the fulcrum — with algorithms used to process language data and design used to convey the results of an analysis. But before I talk about the use of language as data, I want to talk about the use of quantitative data to serve as a comparison. Quantitative data Humans create all data; however, certain characteristics of a research object (herein defined as any item or group of items, abstract or concrete, that is analyzed to better understand a phenomenon) are more easily transformed into data than others. For example, the length, width, height, etc. of an object can be recorded as numerical data by using a standard unit of length, such as the meter, as a basis. While a meter may have different implications for different applications, it is always the same unit of length, regardless of the object being measured, the instrument used for measurement, and the person collecting the data. 
Of course, all measurements are subject to irreducible errors and measurement biases, which propagate throughout the algorithm used for analysis, but these errors can be acknowledged via statistical methods. Therefore, when employing a standard basis of measurement such as the meter, the analysis of quantitative data is fairly simple. Sure, depending on the data available, the algorithm may require concepts from linear algebra, calculus, numerical methods, and magic, all of which can get out of control if left unattended. But even so, when the factors under consideration can be easily represented as numerical data with known units, the analyst can focus on choices related to algorithm and design, rather than the standardization of the data. This aspect of quantitative data, i.e., the basis, may seem somewhat arbitrary, but let us now consider qualitative data. Qualitative data If quantitative data are dependent on the use of a number system, then analogously, qualitative data are dependent on the use of a language system. However, there are several reasons why a language system does not facilitate analysis as well as a numerical system. First, because the number system is based on an ordered relationship among members, certain mathematical properties hold true over the entire system, and these properties hold true for any subset of the system. In comparison, because there is no ordered relationship among the words of a language, a language system does not have simple, universal properties comparable to those of a numerical system. There are certain tendencies to language, such as a verb following a noun in English, but there are many cases that do not exhibit such tendencies. For this reason, language does not easily lend itself to datafication, and so to generate analyzable data from language, many algorithms rely on counts and probabilities. In addition, although the number system includes an infinite number of numbers, it is possible to imagine the full breadth of this system as a one-dimensional line that extends continuously from the largest possible negative number to the largest possible positive number. With a language system, while certain concepts can be considered to exist along a continuum, each word has a discreet definition, and there are discontinuities that exist, even between synonyms. Furthermore, while a number has a precise, static meaning, a word offers an imprecise description that is dependent on context and the vocabulary of the speaker. As mentioned, all quantitative data are subject to errors related to the measurement instrument (instrument limitations) and the person collecting the data (personal limitations). However, with language, the parallels for these errors can lead to complications that cannot be easily addressed with statistical methods. For example, a language may lack a word for a certain phenomenon, and such a lack can be considered as an instrument limitation. Moreover, a text is limited by the writer’s vocabulary (a personal limitation), the words of which may be biased by connotation or subject to malapropism. Clearly, qualitative data is not as straightforward to work with as quantitative data, and therefore, when working with text, some decisions must be made regarding the datafication of the text in relation to the research question. Words in bags and clouds Whenever I mention my interest in text to someone who works in or around data, their eyes light up, as the potential of such a rich source of data is undeniable. 
I think the enthusiasm for such a complex data source is why word clouds and the bag of words algorithm, both of which rely on word counts and thus the disruption of the order of a text, are so popular. As mentioned, in many cases, text corpora do not allow for straightforward analyses or interpretations, and for this reason, it is imperative to keep in mind the following question asked by Professor Hearst in her talk: How often do we have the designs we want versus those our algorithms can (easily) make? In terms of algorithm and design, with regard to text data, a bag of words and word clouds are low-hanging fruit. Such methods, of course, are not without merit, but their indiscriminate application can be criticized as lacking consideration of the purpose they are to serve. Further, it is important to keep in mind that, because language exists with linearity, it is difficult to get to the core of what a text says by disrupting that linearity. For this reason, probabilistic language models, such as n-gram models, are often applied for predictive purposes. However, these methods, too, are limited, as they are highly dependent on the corpora used to create them and they do not inherently consider the flexibility of language. But my goal here is neither to convince anyone to avoid certain methods nor to suggest methods for improving the algorithmic and design methodologies commonly used for text analysis. Instead, I am suggesting that text data receive due respect, as all data deserve, during the development of text analytics projects. What I say are words and meaning, urgency and emotion, the culmination of each day lived through, each sentence read, and everything I do not or cannot understand — no less. So if I use a word, the next can be described by a Markovian process, yes, but it is not independent of myself, my audience, or my mood. Lacking a consistent basis of measurement, context is key. The importance of text analysis This point is the subject of much debate, but it seems as if human beings primarily differ from other animals in their ability to cooperate, which is greatly aided by their ability to communicate abstract ideas. However, while spoken language is thought to have come into existence around the same time as modern homo sapiens, written language was not invented until long after, and so it can be argued (and it is at some length by Steven Pinker in How the Mind Works) that the human brain did not evolve to read. But still, humans have a certain fondness for their complex patterns of strange symbols, so much so that there is something inviting and communal about a written message — whether that message is etched in stone, backlit sans serif on a screen, or scrawled quickly on a piece of scrap paper in the quintessential cursive of your late mother whom you miss — and this inexplicable something is so innate and regardless of the conveyed belief that we tend to let our sameness as humans fade into the background, overtaken by perceived differences. So while one can agree or disagree with a message or misunderstand its intended meaning, still language serves as the shared basis on which we agree or disagree, and that is lovely, to be given the opportunity to express oneself and to understand another. I think that is why people are drawn to word clouds, sort of moth-to-flame-like, even if they do not make it clear what individual words represent, separated from their parent texts as they are. 
We know that words are ours, the collective invention of mankind and our common heritage, that they serve a purpose and let us into the secret world of another’s mind; so we pay attention, each of us still the child reading each word on each sign once learning how. There is incredible potential to use text to reach a better understanding of humanity. And while, as a data scientist, I believe in the incredible power of information to solve problems, as a linguaphile, I believe that language, when treated as data, should be handled with great care and consideration. To analyze a text corpus is to try to infer the aggregate thoughts, beliefs, and feelings of human beings, and to do so, the whole cannot be simply represented as the sum of its parts.
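As a closing illustration of that last point (my own sketch, not part of the original essay), a bag-of-words count reduces a text to the sum of its parts and discards its linearity, while even a simple bigram count only recovers local order:

```python
from collections import Counter

text = "the cat sat on the mat because the mat was warm"
tokens = text.split()

# Bag of words: counts only, word order is gone.
bag = Counter(tokens)
print(bag.most_common(3))          # [('the', 3), ('mat', 2), ('cat', 1)]

# Bigrams: a first step toward keeping (local) order, as in n-gram language models.
bigrams = Counter(zip(tokens, tokens[1:]))
print(bigrams[("the", "mat")])     # 2 -- how often 'mat' follows 'the'
```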
https://medium.com/linguaphile/using-language-as-data-a2223b937e5a
['Danielle Boccelli']
2020-12-01 15:50:34.109000+00:00
['Data Science', 'Data Analytics', 'Language', 'Writing', 'Data']
Seven steps to decide if AI suits your business workflow
By using the right AI technology for your company, you can accelerate your growth. However, business leaders should not forcefully include AI in their operations; instead, they should find specific workflows in which AI can provide maximum value. For example, if you’re in a restaurant company, you might want to use AI to produce weekly analytics by processing electronic bills. Many executives have a trust issue with up-and-coming technologies like AI about how they fit into their business ecosystem. If you are interested in adopting AI but are not sure whether or not it can fit into your business, below are steps to help kick-start the process of using it in your workflow. Identify the areas where AI can provide maximum value for your business. Business is a complex set of intertwined processes that run like a well-oiled machine, so integrating new technology into an existing workflow is not straightforward. Implementing AI comes at a cost and, therefore, value analysis is essential to ensure that your investment gives you maximum returns. First of all, understand the need for AI in your business. For example, Domino uses AI as part of its infrastructure to boost its pizza delivery. Real Talk with Data Scientist and Advanced Analytics Manager | skills required for a data scientist To find out whether your company will benefit from AI, ask yourself a few simple questions: • What is the size of your company digitized? Company digitization requires the transfer of a raw source of data to a digital format. This can include pictures, numbers, etc. Digitization can help a company to scale. For example, if a shopkeeper sells clothing locally, the reach of the business is limited to a specific demographic. On the opposite, if the shopkeeper wanted to digitize by creating a website, the company could reach out to more customers, sell more clothes, and grow. One example of the potential benefits of digitization is Aim, which saw its stock increase by 66 percent over its 8-year digital transformation initiative. Digitization brings monetary benefits and helps to track various critical areas of business in order to reduce risks, such as accounting, financial management, inventory management, etc. One will need enormous human capital to carry out all these activities. AI is efficient when we give it the right amount of digitized data. AI‘s promise lies in discovering secret patterns of data that are not visible to a human being. Digitized data, therefore, plays a vital role in transforming AI sector. If your organization lacks digital data generation, AI may not be needed in the first place. • What are the different forms of digital data that your company collects, and how do you store it? If AI is a vehicle, then the fuel is the digital data. • Would AI provides a better return over time for the investment of time and money required? Speak to your engineers to see how AI can solve your business problems Unplash Engineers bring a different perspective to your business challenges and can help with valuable suggestions. Consult your in-house engineers to understand the scope of the particular problem and the timeframe to solve it. It is crucial to take advice before any commitments are made because they know the depth of the problem. They can also help you narrow the reach of AI in your company and help you start the creation process. Determine the implications of AI for your revenue model. Revenue is a key indicator of growth for any company. 
The defined use of AI cases and their expectations should be evaluated in order to ensure that they do not limit long-term development. Understand the cost implications of AI for your business. The AI is still in a growing phase. We see major progress and successful research in AI every year. However, it would be naive to regard it as a low-cost proposition. The cost implications depending on your use case, but they won’t come at a low price. The internet, for example, was costly during its development process. If your company can afford an AI-based solution that can offer a decent value proposition and improve performance, you can opt for it. Find out how AI is going to help the workers. Employees are vital to the success of every organization. AI will simplify workflows, enhance decision-making, and create knowledge that can allow workers to concentrate on more challenging tasks. For example, in a call center, workers can benefit from AI handling simple language queries without human interference. It can also help to identify and avoid spam calls. Chatbots are another great example that has become the standard in the service industry these days. There are countless possibilities where AI can support and not replace the workforce. Real Talk with Data Scientist and Reinforcement Learning Lead | skills required for data scientist Consider the legal and ethical implications of the implementation of the AI. Keep in mind that AI is a new technology. It is evolving at a rapid pace and can raise some unforeseen challenges and ethical questions. There is an ongoing discussion on the ethics of AI and the extent to which it should be regulated. You need to make sure that your business is not affected by such external forces. Enhance your learning of AI. A common misapprehension of AI among business owners is that AI is capable of solving any problem with little or no human interference. The present state of AI is nowhere near general intelligence. To realize the true value proposition of AI in your company, keep up to date with current AI advancements. Prerequisite for Data Science: It’s Not What You Think It Is | 4000° PLASMA LIGHTSABER BUILD Initially, spend as little as possible and try to get a working prototype ready. Set short-term targets and assess progress when evaluating an AI solution. It is important to increase your understanding of AI through this process and to have trust once you know its challenges. If you want to accelerate your business growth with AI, you should be willing to learn and adapt quickly to changes. My advice to you is to be open-minded and think outside of the box while you are looking for a career in data science. It will give you a competitive edge in your career in data science. Bio: Shaik Sameeruddin I help businesses drive growth using Analytics & Data Science | Public speaker | Uplifting students in the field of tech and personal growth | Pursuing b-tech 3rd year in Computer Science and Engineering(Specialisation in Data Analytics) from “VELLORE INSTITUTE OF TECHNOLOGY(V.I.T)” Career Guide and roadmap for Data Science and Artificial Intelligence &and National & International Internship’s, please refer : More articles for your data science journey:
https://shaiksameeruddin.medium.com/seven-steps-to-decide-if-ai-suits-your-business-workflow-6b8518969303
['Shaik Sameeruddin']
2020-10-20 12:41:27.686000+00:00
['Artificial Intelligence', 'Data Science', 'Technology', 'Data Visualization', 'Machine Learning']
Chicken Dinner- No Longer a Winner
The discovery of antibiotics has historically been viewed as one of the most impactful breakthroughs of early modern medicine but the current rate at which we are exposed to them is drastically altering what we know about the micro biotic state of our world. In recent years medical use is not the only form of antibiotics we ingest; the food we eat also harbours large quantities of these drugs, especially poultry. By looking at the work of two different writers we can see how this problematic topic is relevant and how the opposing opinions that are presented ultimately come to the same conclusion- the antibiotics that our food is being raised on is a major problem. In an article written by Sarah Chapman she discusses the problematic effects that plague our population and environment through the use of an estimated 1.6 million kilograms of antibiotics used on livestock per year. In contrast, Maryn McKenna has written an article highlighting the fact that some markets of the poultry industry cannot sustain a complete withdraw of antibiotics from the production of their livestock and continue to meet the demand of their consumers. Although Chapman and McKenna offer opposing perspectives, both authors are successfully convincing of their views through the use of logical appeal in their writing. Chapman effectively uses cause and effect whereas McKenna successfully demonstrates personal experience to solidify their arguments. The Alteration of Our Micro Biotic Environment In Chapman’s article Playing Chicken she effectively illustrates the negative impact that raising poultry with antibiotics has on the micro biotic health of not only our own bodies but the health of generations to come. As stated in the opening of her article, humans are made up of and coexist with bacteria on a daily basis and have done so for millions of years, our two species are intertwined from a microbiological perspective. Our symbiotic relationship became harmful only when we tactically started eradicating these micro lifeforms in both our own bodies and the bodies of our food, as seen through the discovery and rise of occurrences of Super Bugs. This only being one of the examples Chapman gives to prove that ultimately what happens on the farm will affect what happens at the dinner table. Throughout her article, Chapman reinstates how our relenting assault on bacteria through the overuse of antibiotics will have unknown and potentially harmful repercussions. “Many of the worst disease outbreaks in history, the Black Plague, H1N1, and Ebola were transmitted to humans either directly or through our shared environment with animals” Photo by Alison Marras on Unsplash Chapman’s Use of Cause and Effect I completely agree with the opinion presented by Chapman that the alteration of the micro biotic environment in which we exist will have drastic consequences in the not so distant future. Her use of factual evidence to show the cause and effect of antibiotics can be seen various time in her articles. One example, which I have personal experience with, is the development of Super Bugs. As stated by Sir William Olser: “ills which flesh is heir to’ are not wholly monopolized by the ‘lords of creation” Many of the worst disease outbreaks in history, the Black Plague, H1N1, and Ebola were transmitted to humans either directly or through our shared environment with animals, with the newest of these global crises being Super Bugs. 
Like all living organisms, bacteria work to preserve their species, and although we tried eliminating certain strains of bacteria from the environment, we only made the remaining strains unresponsive to our only defence against them: antibiotics. I have personally experienced this phenomenon, as the antibiotics I was prescribed for a bacterial infection ceased working, the result being that I became septic. Although the exact reason is unknown, one of the explanations the doctors gave me was that the strain of bacteria I was infected with had developed a resistance to the antibiotics that the chicken it once lived in was being raised on, resulting in a mutation that was resistant to the antibiotics used to fight the infection. The alteration of the microbiotic environment that we live in has drastic effects on our health, as I can personally attest. The Dependence on Antibiotics In McKenna’s article In India a Better Economy Means More Chickens- and Loads More Antibiotics, she highlights the fact that some markets cannot withstand the complete withdrawal of antibiotics from mass farming if they are to continue supporting the demand for the product. Between 2004 and 2010, chicken consumption doubled in India, going from one-fourth to one-half of the meat market. This intense growth of an industry requires intensive agricultural measures to be taken to ensure adequate production rates, and in this case that meant antibiotics. Not only are the drugs being used for disease prevention but also for the promotion of growth, a practice which has been banned in the U.S.A. and Canada, at an estimated use of 63,151 tons, twice the amount of antibiotics being used by humans. A complete withdrawal of antibiotics would not only cripple the poultry industry but also the country’s economy, as half of the animal protein market is controlled by chicken, which is being raised on antibiotics. “The antibiotics were pretty much all that was keeping them alive” Photo by Lesly Juarez on Unsplash McKenna’s Use of Personal Experience Although I do not agree with the magnitude at which the antibiotics are being used, McKenna’s personal experience helps in convincing me that there is no other feasible option for the poultry industry at this point. McKenna states, “Chickens were dying at the rate of 1 percent a day. The antibiotics were pretty much all that was keeping them alive.” This shows that India is a perfect example of a market that would not cease the use of antibiotics on its chickens because it would not be able to feed the population that is demanding poultry. The popularity of chicken in India is clear to see; it’s affordable and carries none of the political complexity of other animal proteins like beef. Because of this, there is no consumer pressure to move away from or restrict the use of antibiotics. Major companies have no incentive to change their relationship with these drugs as the public is not bringing up any concerns, the government has no regulations on their use, and this increases the companies’ productivity. Through firsthand accounts like this one you get a better understanding of the complex situation that is the poultry industry in India and why it is so dependent on the use of antibiotics to sustain the industry. Concluding Thoughts Although they write from different perspectives, both Chapman and McKenna successfully convince the reader of their opinion through logical appeal.
Chapman’s use of cause and effect highlights the factual evidence of antibiotic use on poultry farms, while McKenna’s use of personal experience allows the reader to fully understand the complexity of the situation in the Indian poultry market.
https://medium.com/gbc-college-english-lemonade/chicken-dinner-no-longer-a-winner-7ac408ded5f3
['Katie Beaton']
2019-03-10 18:27:14.102000+00:00
['Health']
10 Smooth Python Tricks For Python Gods
№1: Reverse A String Though it might seem rather basic, reversing a string with char looping can be rather tedious and annoying. Fortunately, Python includes an easy built-in operation to perform exactly this task. To do this, we simply slice the string with ::-1. a = "!dlrow olleH" backward = a[::-1] №2: Dims as variables In most languages, in order to get an array into a set of variables we would need to either loop through the values iteratively or access the dims by position like so: firstdim = array[1] In Python, however, there is a way cooler and quicker way to do so. In order to change a list of values into variables we can simply unpack the array into a matching number of variable names: array = [5, 10, 15, 20] five, ten, fift, twent = array №3: Itertools If you’re going to spend any time whatsoever in Python, you will definitely want to get familiar with itertools. Itertools is a module within the standard library that you will reach for constantly when working with iteration. Not only does it make it far easier to code complex loops, it also makes your code both faster and more concise. Here is just one example of a use for Itertools, but there are hundreds: c = [[1, 2], [3, 4], [5, 6]] # Let's convert this matrix to a 1 dimensional list. import itertools as it newlist = list(it.chain.from_iterable(c)) №4: Intelligent Unpacking Unpacking values iteratively can be rather intensive and time consuming. Fortunately, Python has several cool ways in which we can unpack lists! One example of this is the *, which will collect the unassigned values into a list under our variable name. a, *b, c = [1, 2, 3, 4, 5] №5: Enumerate If you’re not aware of enumerate, you probably should get familiar with it. Enumerate will give you the index of each value in a list as you loop over it. This is especially useful in data science when working with arrays rather than data-frames. for i,w in enumerate(array): print(i,w) №6: Name Slices Slicing apart lists in Python is incredibly easy! There are all sorts of great tools that can be used for this, but one that certainly is valuable is the ability to name slices of your list. This is especially useful for linear algebra in Python. a = [0, 1, 2, 3, 4, 5] LASTTHREE = slice(-3, None) print(a[LASTTHREE]) # [3, 4, 5] №7: Group Adjacent Lists Grouping adjacent elements could certainly be done rather easily in a for loop, especially by using zip(), but this is certainly not the best way of doing things. To make things a bit easier and faster, we can write a lambda expression with zip that will group our adjacent elements like so: a = [1, 2, 3, 4, 5, 6] group_adjacent = lambda a, k: list(zip(*([iter(a)] * k))) group_adjacent(a, 3) # [(1, 2, 3), (4, 5, 6)] group_adjacent(a, 2) # [(1, 2), (3, 4), (5, 6)] group_adjacent(a, 1) # [(1,), (2,), (3,), (4,), (5,), (6,)] №8: next() iteration for generators In most normal scenarios in programming, we keep track of our position in a second sequence by maintaining a counter variable that we increment ourselves: array1 = [5, 10, 15, 20] array2 = [x ** 2 for x in range(10)] counter = 0 for i in array1: value = array2[counter] # manually track our position in array2 counter += 1 Instead of this, however, we can use next(). next() takes an iterator, which stores our current position in memory and advances through the sequence in the background.
g = (x ** 2 for x in range(10)) print(next(g)) # 0 print(next(g)) # 1 №9: Counter Another great module from the standard library is collections, and what I would like to introduce to you today is Counter from collections. Using Counter, we can easily get counts of the values in a list. This is useful for getting the total number of values in our data, getting a null count of our data, and seeing the unique values of our data. I know what you’re thinking, “Why not just use Pandas?” And this is certainly a valid point. However, using Pandas for this is certainly going to be a lot harder to automate, and is just another dependency you are going to need to add to your virtual environment whenever you deploy your algorithm. Additionally, a Counter in Python has a lot of features that a Pandas Series doesn’t have, which can make it far more useful for certain situations. import collections A = collections.Counter([1, 1, 2, 2, 3, 3, 3, 3, 4, 5, 6, 7]) A # Counter({3: 4, 1: 2, 2: 2, 4: 1, 5: 1, 6: 1, 7: 1}) A.most_common(1) # [(3, 4)] A.most_common(3) # [(3, 4), (1, 2), (2, 2)] №10: Deque Another great thing coming out of the collections module is deque, a double-ended queue. Check out all the neat things we can do with this type!
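The original post’s deque snippet is not included in this dump, so the following is a minimal sketch of a few of the operations collections.deque supports (the values are arbitrary):

from collections import deque

d = deque([1, 2, 3, 4, 5])
d.append(6)        # add on the right  -> deque([1, 2, 3, 4, 5, 6])
d.appendleft(0)    # add on the left   -> deque([0, 1, 2, 3, 4, 5, 6])
d.pop()            # remove from the right, returns 6
d.popleft()        # remove from the left, returns 0
d.rotate(2)        # rotate two steps to the right -> deque([4, 5, 1, 2, 3])
print(d)

Appends and pops at either end of a deque are O(1), which is what makes it handy for queues and sliding windows where a plain list would be slow.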
https://towardsdatascience.com/10-smooth-python-tricks-for-python-gods-2e4f6180e5e3
['Emmett Boudreau']
2020-06-02 07:44:28.954000+00:00
['Python', 'Data Science', 'Algorithms', 'Computer Science', 'Programming']
Mundane mist
Yesterday holds what was once missed within the cold arms of a stranger remembering every mundane mist consented safety smells like danger As seasons of the moon elapse truth invites a shiny mystery where ancient wolves devour maps to limitless mirrors of history Words fight to emerge unsullied from an ocean tainted by giants afraid to let shadows exceed the piercing weight of their talents Scraps of time strip hope of expectations when verbal habits die in empty conversations
https://nunoricardopoetry.medium.com/mundane-mist-32ac992eba54
['Nuno Ricardo']
2020-10-04 14:28:16.868000+00:00
['Self', 'Social Media', 'Poetry', 'Society', 'Sonnet']
How to tell if your database is controlling your application
An ever-growing list of anti-patterns and symptoms, in no particular order. I think about this mostly from the SELECT-side, so I’m sure there’s a fair amount missing on the INSERT/UPDATE-side, and also from the NoSQL perspective.
https://towardsdatascience.com/how-to-tell-if-your-database-is-controlling-your-application-256d697ce0c1
['Jesse Paquette']
2020-06-24 15:16:31.218000+00:00
['Database', 'Software Development', 'Sql', 'Software Engineering', 'Data Science']
Choosing the Right Software Architecture With Non-Functional Requirements Analysis
Software Architecture How to Pick the Right Software Architecture for a Product From the Start An approach to designing future-proof software architectures using non-functional requirements analysis and product quality attributes. Photo by Annie Spratt on Unsplash When building a new software product, it is crucial to pick the optimal architecture as early as possible. The right architecture supports the engineering team in building the product. Poor architecture choices hinder development and may lead to expensive rework in the future. In this article I’d like to share an approach to designing a future-proof software architecture that has helped me to come up with simpler designs, ship new functionality faster, and avoid big architecture changes as the products evolved. What I particularly like about this approach is that it scales up and down depending on the project size (product, service, or just a large feature), and that it can be as rigorous or as informal as appropriate for the engineering culture in an organisation. Let’s briefly look at the two ways an engineering team may come up with a software architecture for their product, ad hoc versus planned, and then talk about the approach itself and look at an example of how it can be used. Ad Hoc Architecture vs Planned Architecture Ad hoc architecture emerges when the dev team just hacks ahead and developers build the product according to their own assumptions, opinions, and preferences without much of a plan. This is alright for small projects with a short life and no plans for further evolution, such as marketing campaign websites. Planned architecture is the opposite. It is the product of a design process run by the engineering team. Ad hoc vs planned architecture: bigger and more complex products and services require planned architecture. Planned architecture can be driven either by subjective factors — personal experience, preferences and assumptions of the engineers in the team — or by objective factors — the organisation’s goals and the conditions of the environment where the product is going to be used. Subjective and objective factors that influence software architecture design for a product or service. The issue with the former case is that it doesn’t guarantee that the team will come up with an architecture optimised for achieving the actual organisation goals. The reason is that developers would be proposing technologies and design patterns based on their opinions, but there won’t be a shared set of criteria for them to evaluate available options and pick the most appropriate ones. On the contrary, the latter case encourages the team to first agree on a shared set of criteria derived from the analysis of the organisation goals and the environment conditions. In addition, it reduces the impact of subjective factors on the outcome and encourages engineers to go beyond their current experience and knowledge to look for the optimal technical solutions. As a result, the team increases their chance of designing an architecture optimised for achieving the organisation goals and reduces the chance of having to make expensive architecture changes in the future. An Approach to Architecture Design Based on Non-Functional Requirements Analysis The approach to architecture design that has been working well for me consists of three steps: Identification and analysis of the non-functional requirements for the product. Selection of the relevant software quality attributes for the product and setting their targets.
Selecting technologies and design patterns that would meet the targets for the relevant quality attributes to satisfy the non-functional requirements. Having identified non-functional requirements for the product or service, engineers can select relevant quality attributes, set their targets, and design the software architecture to achieve them. For those who are not familiar with the terminology, non-functional requirements are the criteria for evaluating how a software system should perform rather than what it should do. An example would be a requirement for a web API endpoint response time to be under 200ms. When we say that a software product should be “secure”, “highly-available”, “portable”, “scalable” and so on, we are talking about its quality attributes. In other words, a software product must have certain quality attributes to meet certain non-functional requirements. Step 1. Identification and Analysis of Non-Functional Requirements Product managers and business analysts can rarely provide engineers with a comprehensive list of non-functional requirements: it is difficult for many people to uncover their implicit expectations and assumptions. In addition, some non-functional requirements are related to purely technical aspects of the product, and the Product team may be completely unaware of them. As a result, the engineering team has to lead this process themselves. Working with Product, Design and possibly Marketing, Support, Analytics, Legal and other departments, engineers need to walk through what the new product is meant to be doing to identify expectations, assumptions and requirements for how the new product should perform its functions. Please note that it doesn’t have to be a series of formal meetings. Sometimes a couple of quick chats with representatives of those departments is enough to clarify the details and move to the next stage. Let’s look at an example where a team is tasked with developing a mobile app for streaming video content. This leaves a lot of room for interpretation. After a conversation with the Product team and asking them questions like: How many customers would you expect 6-12 months after launch? How many customers do you think will be using this at the same time? How long are the videos going to be? How bad would it be if the product were down for an hour? Which markets are you planning to launch in during the first 12 months? that brief may be expanded into something like “in 12 months after launch the product must be scalable enough to stream video content to 0.5–1M concurrent users 24/7 all over the world”. That would encourage developers to consider only those design patterns and technologies that allow them to develop a highly scalable, highly available, and fault-tolerant architecture. Furthermore, Marketing may add that the product actually needs to be localisable since it is going to be available world-wide. In addition, the Legal team may require the product to be compliant with the laws of various jurisdictions around the world regarding what video content may and may not be available to various age groups. As the outcome of this step, we have identified five quality attributes potentially relevant for the architecture of the product: scalability, availability, fault-tolerance, localisability, and compliance. As mentioned above, there is no need to clarify every possible aspect of the future product behaviour at this stage.
Engineers should be pragmatic: as soon as the essential non-functional requirements have been identified and analysed and the team has reached the point of diminishing returns, it may be time to move to the next step. Step 2. Selection of Relevant Quality Attributes Having the list of quality attributes from the previous step, developers can select which ones the product architecture should be optimised for and define their targets. Developers may also want to know what quality attributes their product architecture doesn’t have to be optimised for. Then they would know which attributes to sacrifice to meet the important requirements. Candidates for such irrelevant attributes either have no related non-functional requirements or have ones that are easy to meet. For the example above, the engineering team may decide to optimise for: scalability, so the product could handle 0.5–1M concurrent users; availability, so it could have, let’s say, 99.995% uptime; localisability, so its content, support and marketing materials could be available in the most popular 10 (20, 30, etc.) languages in the world. As for the remaining two quality attributes, the team may decide that there is no need to optimise the architecture for them: fault-tolerance, as it may be achieved together with availability; compliance, as content restrictions per geographic location don’t have a significant impact on the technologies and architecture patterns that are going to be used in the product. It is convenient for developers to have a comprehensive list of possible quality attributes to go through and check whether each attribute has any related non-functional requirements for the product. Having such a list greatly simplifies steps 1 and 2. I use the list of 31 quality attributes grouped into eight characteristics that are defined in the international standard ISO/IEC 25010 in the section that introduces a software product quality model: functional suitability: functional completeness, functional correctness, functional appropriateness; performance efficiency: time behaviour, resource utilisation, capacity; reliability: maturity, availability, fault tolerance, recoverability; usability: appropriateness recognisability, learnability, operability, user error protection, user interface aesthetics, accessibility; security: confidentiality, integrity, non-repudiation, accountability, authenticity; compatibility: co-existence, interoperability; maintainability: modularity, reusability, analysability, modifiability, testability; portability: adaptability, installability, replaceability. You can find their definitions either in the free fragment of the official standard, or here. Even though they sound quite abstract, in practice experienced developers quickly pick up what they mean and start using them. There are other quality attributes. This Wikipedia article has a few examples. Also, some projects, organisations or business domains may need unique quality attributes.
As in the previous step, there is no need to capture every possible quality attribute. Only the relevant ones matter. Four to seven relevant quality attributes may be sufficient to efficiently generate and evaluate design options. Step 3. Making Architecture Decisions Having the list of relevant quality attributes and their targets, the engineering team can start generating architecture options. Let’s take a look at some obvious high-level architecture options that may be proposed for the three quality attributes identified in the example above. Scalability: the product can use a CDN to deliver at least static content faster to its users; the videos can be stored in cloud storage, possibly in multiple regions globally, to make streaming faster; application servers may also need to be deployed to multiple regions; the databases should be highly scalable and have to support replication to other regions to avoid high latency when called from the app servers located on the opposite side of the world; part of the backend functionality may be implemented with cloud functions. Availability: the product may need to be deployed to several availability zones within each region; deployment strategies should allow deployment without downtime; each region may have to hold a copy of all the data to be able to run independently. Localisability: the product must support integration with multiple payment methods popular in different countries; it must be integrated with a system for managing UI translations; each content item needs to support multiple localised versions; the product needs to reliably identify the location of a user to know which content is and is not available there. Having identified the available options, the team can finally select the ones that should achieve the targets for the relevant quality attributes in the most efficient way. Even though that doesn’t guarantee that the product architecture won’t have to change in the future, it still reduces the chance that the changes will be significant and gives the engineers enough information to confidently move forward.
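As a purely illustrative sketch, not taken from the article, the targets from the streaming example could be written down as a small shared artifact that the team checks candidate options against; the option names and the mapping below are hypothetical.

# Hypothetical record of the quality attribute targets from the example above.
quality_targets = {
    "scalability": "0.5-1M concurrent viewers within 12 months of launch",
    "availability": "99.995% uptime",
    "localisability": "content, support and marketing in the 10+ most popular languages",
}

# Hypothetical candidate options, each listing the targets it is expected to meet.
options = {
    "CDN + multi-region storage and app servers": {"scalability", "availability"},
    "single-region monolith": {"availability"},
}

for name, meets in options.items():
    missing = sorted(set(quality_targets) - meets)
    print(f"{name}: still to address -> {missing or 'nothing'}")

Even a toy checklist like this makes the trade-offs explicit when the team compares options in step 3, and it leaves a record of why particular technologies were chosen.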
https://medium.com/swlh/how-to-use-non-functional-requirements-analysis-to-choose-the-right-software-architecture-eab7225ba8d2
['Andrei Gridnev']
2020-11-24 12:27:54.724000+00:00
['Software Architecture', 'Software Engineering', 'Software Development', 'Engineering Mangement', 'Product Development']
Will Humanity Ever Learn?
Will Humanity Ever Learn? Actions speak louder than words — or so they say. Photo by Elijah O’Donnell on Unsplash We’ve all heard the old saying: “the definition of insanity is doing the same thing over and over again and expecting different results.” The quote has been attributed to a number of different sources over the years — most notably Albert Einstein. Regardless of its origin, it refers to the idea that if you repeat the same action, you will never get different results. But does the same logic apply to the concept of inaction? I don’t really watch the news anymore. Not because I want to remain ignorant of current events, but because I’m fed up with hearing about the horrifying things that are happening all over the world. I can’t remember the last time I opened a newspaper and was greeted by something positive. Can you? A Monument To Hypocrisy — But Who Gives A Dame? In April, the world watched as Paris’ Notre-Dame Cathedral burned. A nation mourned, but within days the French elite had heard the call and answered by pledging hundreds of millions of dollars to restore the landmark to its former glory. This week, the world watches as the Amazon rainforest burns. Despite the fact that we lose an area of rainforest the size of a football pitch every single minute, despite the research showing the important role the rainforest plays in combating climate change, I don’t hear anyone pledging millions to prevent further destruction. I don’t hear anything. Anything except the flickering of the flames. Mass Shootings & Empty Promises Three weeks ago, the US fell victim to two deadly mass shootings within 24 hours. Nine people were killed in a shooting in Dayton, Ohio, while another twenty-two were killed in El Paso, Texas. Did you know that there were 216 days between January 1st and August 5th, 2019? And that in that time 255 mass shootings were recorded in the United States, while gun violence claimed almost 9,000 lives? That is an average of more than one mass shooting incident and around 45 deaths per day so far this year. In the first week or so afterward, you couldn’t read anything online without seeing some mention of necessary reforms to gun control. Medium itself was inundated with posts discussing the need for background checks for gun purchases, or calls for a ban on the selling of automatic rifles. Unless, of course, you have feral hogs to contend with on a daily basis. But in the few weeks since? Nothing. Tragedies strike and nations will mourn. Politicians make empty promises and cast blame elsewhere. After all, it’s clearly an issue of mental illness, isn’t that right Mr. Trump? It has nothing to do with the fact that in most US states you’re legally able to buy an assault rifle before you can buy a beer. And once the families have buried their loved ones, and the echoes of gunfire have faded, the world will forget once again. I’m not saying that everyone will forget what happened. The families of those that were killed will never forget the tragedy that took their loved ones away. But the people who have a voice in the world, the leaders, politicians, people with the power to effect change? They will forget. They will bury their heads in the sand. Until next time. An Uncertain Future In my first article on Medium, I discuss my thoughts around the morality of bringing children into the world. I typed those words with a heavy heart, as a father of two children who have brought immeasurable joy into my life. But the joy they have given me has come at a cost.
And I have repaid them by bringing them into a world full of uncertainty, hatred, and destruction. Climate change and overpopulation are the primary issues I discuss in that piece, but you know as well as I do that the list goes on and on. I worry about the world my children will grow up in. Not because I think there is no hope for the planet, I know there is. But we’re not doing anything about it. Or at least, we’re not doing anything that truly makes a difference. Our world is plagued with problems. We have the power to do something about them if only the people that lead us were to do something more than cast blame and make false promises. The problems we face extend far beyond forest fires and mass shootings. They say that actions speak louder than words. But I believe inaction speaks loudest of all.
https://medium.com/1-one-infinity/will-humanity-ever-learn-856aa57cbdfa
['Jon Peters']
2019-08-25 21:56:28.102000+00:00
['People', 'Life', 'Humanity', 'Self Improvement', 'Life Lessons']
Facebook Inverse Cooking Algorithm
Facebook Inverse Cooking Algorithm Predicting a full recipe from an image better than humans Figure 1 Predicted ingredients after running the Inverse Cooking algorithm on a meal of sushi [3] This recipe generation algorithm was developed by Facebook AI Research, and it is able to predict the ingredients, the cooking instructions and a title for a recipe directly from an image (Figure 2) [1]. Figure 2 Example of a recipe generated by the Inverse Cooking algorithm [1] In the past, algorithms have used simple systems of recipe retrieval based on image similarities in an embedding space. This approach is highly dependent on the quality of the learned embedding, the dataset size and its variability. Therefore, these approaches fail when there is no match between the input image and the static dataset [1]. The inverse cooking algorithm, instead of retrieving a recipe directly from an image, proposes a pipeline with an intermediate step in which the set of ingredients is obtained first. This allows the instructions to be generated taking into account not only the image but also the ingredients (Figure 1) [1]. Figure 3 Inverse Cooking recipe generation model with the multiple encoders and decoders, generating the cooking instructions [1] One of the major achievements of this method was to present higher accuracy than a baseline recipe retrieval system [2] and the average human [1] when predicting the ingredients from an image. Figure 4 Left: IoU and F1 scores for ingredients obtained with the retrieval approach [2], Facebook’s method (Ours) and humans. Right: Recipe success rate according to human judgment [1] The Inverse Cooking algorithm was included in a food recommendation system app developed and published here. Based on the predicted ingredients, the web application provides several suggestions to the user, such as different ingredient combinations (Figure 1). References [1] A. Salvador, M. Drozdzal, X. Giro-i-Nieto and A. Romero, “Inverse Cooking: Recipe Generation from Food Images,” Computer Vision and Pattern Recognition, 2018. [2] A. Salvador, N. Hynes, Y. Aytar, J. Marin, F. Ofli, I. Weber and A. Torralba, “Learning cross-modal embeddings for cooking recipes and food images,” Computer Vision and Pattern Recognition, 2017. [3] Towards Data Science, “Building a Food Recommendation System,” 2020. [Online]. Available: https://towardsdatascience.com/building-a-food-recommendation-system-90788f78691a. [Accessed 18 May 2020].
https://towardsdatascience.com/facebook-inverse-cooking-algorithm-88aa631e69c7
['Luís Rita']
2020-05-20 15:45:00.665000+00:00
['Health', 'Inverse Cooking Algorithm', 'Deep Learning', 'Food Recommendation', 'Recursive Neural Networks']
Using Convolutional Neural Networks to Predict Pneumonia
Using Convolutional Neural Networks to Predict Pneumonia An introduction to CNNs and their applications. This blog post will start with a brief introduction and overview of convolutional neural networks and will then transition over to applying this new knowledge by predicting pneumonia from X-ray images with an accuracy of over 92%. While such an accuracy is nothing to get too excited about, it is a respectable result for such a simple convolutional neural network. When it comes time to show the code examples, the code will be shown first, then below each code example there will be an explanation about the code. Dataset: Pneumonia X-Ray Dataset A Brief History of CNNs and the Visual Cortex Convolutional Neural Networks (CNNs), or ConvNets, are neural networks that are commonly used for image and audio recognition and classification. CNNs are inspired by an animal brain’s visual cortex. Studies have shown that monkeys’ and cats’ visual cortices have neurons that respond to small subfields of the visual field. Every neuron is responsible for a small section of the visual field, called the receptive field. Together, all neurons in the visual cortex will cover the entire visual space (Hubel, 1968). The human brain’s visual cortex is made up of columns of neurons that share a similar function. An array of these neuronal columns makes up what is called a module (Neuroscientifically Challenged, 2016). Each module is capable of responding to only a small subsection of the visual field and, therefore, the visual cortex consists of many of these modules to cover the entire area. While this is not exactly how our convolutional neural network will function, there will be noticeable similarities to an animal’s visual cortex. A Brief Introduction to CNNs Like all common neural networks, CNNs have neurons with adjustable weights and biases. Normal neural networks are fully connected, meaning that every neuron is connected to every neuron from the previous layer. CNNs are not fully connected like normal neural networks though, as this would be too computationally expensive and is simply not needed to achieve the desired results. Using a fully connected neural network would not be very efficient when dealing with image data with large input sizes. To imagine the large number of parameters, think of our chest X-ray images. These images will have an input shape of 64x64x3, or 64 wide, 64 high, with 3 color channels. If a fully connected neural network were to be used, this would mean that a single neuron in a single hidden layer would consist of 12,288 connections (64 x 64 x 3 = 12,288) (CS231n, 2018). This is with only one fully connected neuron. Imagine the number of all the weights in a neural network with many neurons! It is easy to understand why fully connected neural networks would not be the most efficient method of classifying images. This is where CNNs come in handy, except a CNN’s architecture does in fact include a few fully connected layer(s). A Brief Introduction to a CNN’s Architecture Like all neural networks, CNNs have an input and output layer with a number of hidden layers that will apply an activation function, typically ReLu. A CNN’s design will consist of three main layers: the convolutional layer, the pooling layer, and the fully connected layer. Each layer will be covered below: Convolutional Layer The convolutional layer is responsible for finding and extracting features from the input data. Convolutional layers use filters, also called kernels, for this feature extraction process.
Since CNNs are not fully connected, neurons are only connected to a predetermined subregion of the input space. The size of this region is called the filter size, or receptive field. The receptive field of a neuron is simply the space that it will receive inputs from. For this example, we will be using a filter size of 3x3. We only set the width and height of the receptive field because the depth of the filter must be the same depth as the input and is automatically set. In our case, our input has 3 color channels. Therefore, the input’s depth is 3. This means that each neuron in this convolutional layer will have 27 weights (3x3x3 = 27). A convolutional layer convolves the input by sliding these filters around the input space while computing the dot product of the weights and inputs. The pixels within the filter will be converted to a single value that will represent the entire receptive field. Pooling Layer Pooling layers, otherwise known as downsampling layers, will mostly be seen following convolutional layers of the neural network. The job of the pooling layer is to reduce the spatial dimensions of the input. This will result in a reduced number of parameters and will also help our model generalize and avoid overfitting. This blog post will be using max pooling, the most commonly used type of pooling layer. There are other versions of the pooling layer such as average pooling, but the focus of this post will be on max pooling. Max Pooling: The convolutional layer will find a specific feature in a region within the input and will assign it a higher activation value. The pooling layer will then reduce this region and create a new representation. The max pooling layer essentially creates an abstraction of the original region by using the max values found in each subregion. Max pooling will sweep over each subregion, apply a max filter that extracts the highest value from each subregion and create an abstraction with reduced dimensions. As an example, take a 4x4 matrix as our input, a 2x2 filter to sweep over it, and a stride of 2. The 2x2 pool size, or filter, determines the amount by which we downscale the spatial dimensions; for a 2x2 pool size, we downscale by half each time. The stride determines the number of steps to move while scanning the input matrix; with a stride of 2, the 2x2 region being scanned moves two positions over each time, so consecutive windows never overlap. For instance, max pooling the 4x4 input [[1, 3, 2, 1], [4, 6, 5, 7], [3, 2, 8, 9], [0, 1, 4, 5]] with a 2x2 filter and a stride of 2 produces the 2x2 output [[6, 7], [3, 9]], since 6, 7, 3 and 9 are the largest values in the four non-overlapping 2x2 subregions. Fully Connected Layer Like normal neural networks, each neuron in the fully connected layer of a CNN is connected to every neuron in the previous layer. The fully connected layers are responsible for classifying the data after feature extraction. The fully connected layer will look at the activation maps of high-level features created by the convolutional layers or pooling layers and will then determine which features are associated with each class. For our dataset, we have two classes: pneumonia and normal. The fully connected layers will look at the features the previous layers have found and will then determine which features best help predict the class the image will fall under. A Brief Introduction to Pneumonia Every year in the United States alone, about one million people will visit the hospital due to pneumonia. Of those one million people, about 50,000 will die from pneumonia each year (CDC, 2017).
Pneumonia is an infectious inflammatory disease that affects the lungs of people of all ages and is typically caused by viral or bacterial infections. Pneumonia affects one or both of the lungs and causes the alveoli (air sacs) to fill up with fluid, bacteria, microorganisms and pus (NIH, 2018). Pneumonia is diagnosed in many ways; one common way of confirming it is through chest X-rays. Chest X-rays are among the best and most accurate tests for determining whether someone has pneumonia. While it is crucial, detecting pneumonia can sometimes be a difficult task. Pneumonia often shows up only vaguely in X-rays and can also get mixed in with other diseases present in that local area. Data Preparation and Analysis The first portion of the code will be dedicated to preparing the data. This section will be covered in less detail than the actual building of the model. I put all of the code in this Data Preparation and Analysis section, excluding the visuals, in a separate file called pneumonia_dataset for later importing in the Applying CNNs to Predicting Pneumonia section. Importing the dataset from this file will be explained at the beginning of that section. import os import numpy as np import matplotlib.pyplot as plt from glob import glob from keras.preprocessing.image import ImageDataGenerator The first few lines are importing the libraries we will need for preparing and visualizing our data. path = "./chest_xray" dirs = os.listdir(path) print(dirs) Output: ['.DS_Store', 'test', 'train', 'val'] Here we are setting the path of the chest_xray folder for later use. We are then printing out the directories from within the chest_xray folder. Notice that the folder is split into three subfolders: test, train and val, or validation. Each folder contains chest X-ray images that we will need to use for training and testing. train_folder = path + '/train/' test_folder = path + '/test/' val_folder = path + '/val/' train_dirs = os.listdir(train_folder) print(train_dirs) Output: ['.DS_Store', 'PNEUMONIA', 'NORMAL'] Next, we will set the paths for each folder. We can use the “path” variable we set earlier and concatenate that with each subfolder’s name. We will then want to see the contents of the training folder. To view the directories, we will use the listdir() function for the training folder, then print the results. train_normal = train_folder + 'NORMAL/' train_pneu = train_folder + 'PNEUMONIA/' We can then take our training folder and set the paths to each class. In this case, we have two classes: the normal images and the pneumonia images. If we want to visualize images that are specifically “normal” or “pneumonia”, then we will create a variable that contains the path to these images for later reference. pneu_images = glob(train_pneu + "*.jpeg") normal_images = glob(train_normal + "*.jpeg") Now that we have split the training folder into “normal” and “pneumonia”, we can pull all of the images out of each class. The images in this dataset are all jpeg images, so for each path we will add .jpeg at the end to make sure we are pulling out the images.
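As a quick sanity check that is not in the original post, it can help to count the images in each class at this point, since the two classes are not guaranteed to be balanced; the snippet below assumes the same folder layout used above.

# Count the training images per class (same folder layout as above).
from glob import glob

n_pneumonia = len(glob("./chest_xray/train/PNEUMONIA/*.jpeg"))
n_normal = len(glob("./chest_xray/train/NORMAL/*.jpeg"))
print("Pneumonia:", n_pneumonia, "| Normal:", n_normal)

If the counts turn out to be heavily skewed towards one class, that is worth keeping in mind when reading the accuracy figures later on.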
def show_imgs(num_of_imgs): for img in range(num_of_imgs): pneu_pic = np.asarray(plt.imread(pneu_images[img])) normal_pic = np.asarray(plt.imread(normal_images[img])) fig = plt.figure(figsize= (15,10)) normal_plot = fig.add_subplot(1,2,1) plt.imshow(normal_pic, cmap='gray') normal_plot.set_title('Normal') plt.axis('off') pneu_plot = fig.add_subplot(1, 2, 2) plt.imshow(pneu_pic, cmap='gray') pneu_plot.set_title('Pneumonia') plt.axis('off') plt.show() We will create a function called show_imgs() to visualize the chest X-ray images from within our training set. The function will take one argument that specifies how many images to show (num_of_imgs). We will then use a for loop with a range of num_of_imgs to show however many images are specified. We will be showing normal images and pneumonia images side by side, so we will add two subplots: one for normal, one for pneumonia. The color map for these images will be 'gray'. If you feel like changing the color map, head over to Matplotlib’s color map reference page. For each image shown, we will label it as either “normal” or “pneumonia” by setting each subplot’s title. show_imgs(3) We can use our show_imgs() function like this. We will call the function and give it one argument: the number of images of both classes we would like to show. train_datagen = ImageDataGenerator(rescale = 1/255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True, rotation_range = 40, width_shift_range = 0.2, height_shift_range = 0.2) This is called image preprocessing, or data augmentation. We will be using the ImageDataGenerator() class from Keras for our data augmentation. Data augmentation helps us to expand our training dataset. The more training data and the more variety, the better. With more training data and with slightly manipulated data, overfitting becomes less of a problem as our model has to generalize more. The first step is to rescale our data. Rescaling images is a common practice because most images have RGB values ranging from 0–255. These values are too high for most models to handle, but by multiplying these values by 1/255, we can condense each RGB value to a value between 0–1. This is much easier for our model to process. Next we have shear_range, which will randomly apply shear mapping, or shear transformations, to the data. The value "0.2" is the shear intensity, or shear angle. zoom_range is also set to "0.2". This is for randomly zooming in on the images. horizontal_flip is set to "True" because we want to randomly flip half of the images in our dataset. rotation_range is the range in degrees within which the images may be randomly rotated. width_shift_range and height_shift_range are ranges for randomly translating images. test_datagen = ImageDataGenerator(rescale = 1/255) This is where we rescale our test set. The test set does not need all of the same transformations applied to the training data. Only the training data should be manipulated to avoid overfitting. The test set must consist of the original images so that we accurately predict pneumonia on real, minimally manipulated, images.
training_set = train_datagen.flow_from_directory(train_folder, target_size= (64, 64), batch_size = 32, class_mode = 'binary') val_set = test_datagen.flow_from_directory(val_folder, target_size=(64, 64), batch_size = 32, class_mode ='binary') test_set = test_datagen.flow_from_directory(test_folder, target_size= (64, 64), batch_size = 32, class_mode = 'binary') Output: Found 5216 images belonging to 2 classes. Found 16 images belonging to 2 classes. Found 624 images belonging to 2 classes. Now we will take the paths of our test, train, and validation folders and generate batches of augmented data using flow_from_directory() from Keras. The first argument is the directory to pull from. The second argument is the target size, or the dimensions of the images after they are resized. batch_size sets how many images each generated batch contains (32 here), and “class_mode” is set to “binary”, which will return 1D binary labels. This dataset calls for binary classification due to the fact that there are only two classes. Now that we are done preparing our data, we can move on to building the model, training it, then testing it and getting our results in the form of accuracy scores. Applying CNNs to Predicting Pneumonia from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, BatchNormalization, Dropout import pneumonia_dataset Keras is a high-level Python neural network library that runs on top of TensorFlow. Keras enables quick and efficient implementation and experimentation of deep learning and machine learning algorithms while still being very effective. Keras will be our deep learning library of choice for this blog post, so we will be importing a few required layers and models to make our convolutional neural network function well. The last import statement is the pneumonia_dataset file that I mentioned earlier in the Data Preparation and Analysis section. training_set, test_set, val_set = pneumonia_dataset.load_data() Output: Training Set: Found 5216 images belonging to 2 classes. Validation Set: Found 16 images belonging to 2 classes. Test Set: Found 624 images belonging to 2 classes. The pneumonia_dataset file will return a training set, test set and a validation set that we will appropriately name. This will return a summary of how our data is split up amongst each set, including the number of images in each set and how many classes these images will fit into. model = Sequential() model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3), padding='same')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(32, (3, 3), activation='relu', padding='same')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, (3, 3), activation='relu', padding='same')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(128, activation = 'relu')) model.add(Dropout(0.5)) model.add(Dense(1, activation = 'sigmoid')) This is the exciting part. First, we create our model using the “Sequential” model from Keras. This model is a linear stack of layers, meaning that we will create our model layer-by-layer. 1st Convolutional Layer: The first convolutional layer is our input layer. The first parameter is the number of convolutional filters to use in the layer, which is set to “32”. This is also the number of neurons, or nodes, that will be in this layer. The second parameter is the filter’s size, or the receptive field.
Imagine we are creating a window of size (3, 3), or a width of three and a height of three, that our convolutional layer is restricted to looking through at any given time. The third parameter we will set is the activation function. Our nonlinear activation function is ReLu, or rectified linear unit. The ReLu function is f(x) = max(0, x). Therefore, all negatives are converted to zeros while all positives remain the same. ReLu is one of the most popular activation functions because it reduces the vanishing gradient issue and is computationally cheap. This does not mean that the ReLu function is perfect, but it will get the job done for most applications. The fourth parameter is the input shape. This parameter only needs to be specified in the first convolutional layer. After the first layer, our model can handle the rest. The input shape is simply the shape of the images that will be fed to the CNN. The shape of our input images will be (64, 64, 3) (width, height, depth). The final parameter is the padding, which is set to “same”. This will pad the input in a way that makes the output have the same length as the initial input. 1st Max Pooling Layer: The max pooling layers will only have one parameter for this model. 2nd Convolutional and Max Pooling Layer: The second convolutional layer and max pooling layer will be the same as the previous layers above. The second convolutional layer will not need the input size to be specified. 3rd Convolutional Layer: In the third convolutional layer, the first parameter will be changed. In the first two convolutional layers, the number of filters, or neurons in the layer, was set to “32”, but for the third layer it will be set to “64”. Other than this one change, everything else will stay the same. 3rd Max Pooling Layer: The third max pooling layer will be the same as the two previous pooling layers. Flatten: Flattening is required to convert multi-dimensional data into usable data for the fully connected layers. In order for the fully connected layers to work, we need to convert the convolutional layer’s output to a 1D vector. Our convolutional layers will be using 2D data (images). This will have to be reshaped, or flattened, to one dimension before it is fed into the classifier.
An excerpt of the model summary around the flattening step shows the effect: the max_pooling2d_16 (MaxPooling2D) layer outputs shape (None, 6, 6, 64) and the following flatten_5 (Flatten) layer turns that into shape (None, 2304), i.e. 6 x 6 x 64 values per image; neither layer adds any parameters. Dense — ReLu: Dense layers are the fully connected layers, meaning that every neuron is connected to all the neurons in previous layers. We will be using 128 nodes. This also means that the fully connected layer will have an output size of 128. For this fully connected layer, the ReLu activation function will be used. Dropout: Dropout is used to regularize our model and reduce overfitting. Dropout will temporarily “drop out” random nodes in the fully connected layers. This dropping out of nodes will result in a thinned neural network that consists of the nodes that were not dropped. Dropout reduces overfitting and helps the model generalize due to the fact that no specific node can be 100% reliable. The “.5” means that the probability of a certain node being dropped is 50%. To read more about dropout, check out this paper. Dense — Sigmoid: Our final fully connected layer will use the sigmoid function. Our problem involves two classes: pneumonia and normal. This is a binary classification problem where sigmoid can be used to return a probability between 0 and 1. If this were a multi-class classification, the sigmoid activation function would not be the weapon of choice. However, for this simple model, the sigmoid function works just fine. The sigmoid function can be defined as sigmoid(x) = 1 / (1 + e^(-x)). model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) We can now configure the model using the compile method from Keras. The first argument is the optimizer, which will be set to “adam”. The adam optimizer is one of the most popular algorithms in deep learning right now. The authors of Adam: A Method for Stochastic Optimization state that Adam combines the advantages of two other popular optimizers: RMSProp and AdaGrad. You can read about the effectiveness of Adam for CNNs in section 6.3 of the Adam paper. The second argument is the loss function. This model will use the binary cross entropy loss function. Our model will be conducting binary classification, so we can write this loss function as loss = -(y * log(p) + (1 - y) * log(1 - p)), where “y” is either 0 or 1, indicating if the class label is the correct classification, and where “p” is the model’s predicted probability. The last argument is the metric function that will judge the performance of the model. In this case, we want the accuracy to be returned.
model_train = model.fit_generator(training_set, steps_per_epoch = 200, epochs = 5, validation_data = val_set, validation_steps = 100) Output: Epoch 1/5 200/200 [==============================] - 139s 697ms/step - loss: 0.2614 - acc: 0.8877 - val_loss: 0.5523 - val_acc: 0.8125 Epoch 2/5 200/200 [==============================] - 124s 618ms/step - loss: 0.2703 - acc: 0.8811 - val_loss: 0.5808 - val_acc: 0.8125 Epoch 3/5 200/200 [==============================] - 124s 618ms/step - loss: 0.2448 - acc: 0.8984 - val_loss: 0.7902 - val_acc: 0.8125 Epoch 4/5 200/200 [==============================] - 121s 607ms/step - loss: 0.2444 - acc: 0.8955 - val_loss: 0.8172 - val_acc: 0.7500 Epoch 5/5 200/200 [==============================] - 119s 597ms/step - loss: 0.2177 - acc: 0.9092 - val_loss: 0.8556 - val_acc: 0.6250 It is now time to train the model! This will be done using the fit_generator() method from Keras. This will train the model on batches of data that are generated from the training set. Alongside the training generator itself, the steps_per_epoch argument is set to 200. The steps per epoch tell the model the total number of batches of samples to produce from the generator before concluding that specific epoch. The next argument is the number of epochs, or training iterations. The Keras documentation states that an epoch is defined as an iteration over the entire data provided, as defined by steps_per_epoch. The third argument is the validation data the model will use. The model will not be trained on the validation data, but this will help measure the loss at the end of every epoch. The final argument is the validation steps. Our validation data is coming from a generator (see above code), so the number of batches of samples to produce from the generator must be set, similar to the steps per epoch. test_accuracy = model.evaluate_generator(test_set,steps=624) print('Testing Accuracy: {:.2f}%'.format(test_accuracy[1] * 100)) Output: Testing Accuracy: 90.22% Now that the model has been trained, it is time to evaluate the model’s accuracy on the test data. This will be done by using the evaluate_generator() method from Keras. This evaluation will return the test set loss and accuracy results. Just like the fit generator, the first argument for the evaluate generator is the generator from which to pull samples. Since we are testing our model’s accuracy, the test set will be used. The second argument is the number of batches of samples to pull from the generator before finishing. We can then print the accuracy and shorten it to only show two decimal places. The accuracy will be returned as a value between 0–1, so we will multiply it by 100 to receive the percentage. After that, the model is complete! We have had some success with predicting pneumonia from chest X-ray images! Everything Put Together (20 Epochs) Output: Found 5216 images belonging to 2 classes. Found 16 images belonging to 2 classes. Found 624 images belonging to 2 classes.
Epoch 1/20 200/200 [==============================] - 142s 708ms/step - loss: 0.5141 - acc: 0.7369 - val_loss: 0.6429 - val_acc: 0.6250 Epoch 2/20 200/200 [==============================] - 137s 683ms/step - loss: 0.4034 - acc: 0.8058 - val_loss: 0.6182 - val_acc: 0.7500 Epoch 3/20 200/200 [==============================] - 134s 670ms/step - loss: 0.3334 - acc: 0.8483 - val_loss: 0.6855 - val_acc: 0.6875 Epoch 4/20 200/200 [==============================] - 129s 644ms/step - loss: 0.3337 - acc: 0.8516 - val_loss: 0.8377 - val_acc: 0.6875 Epoch 5/20 200/200 [==============================] - 139s 696ms/step - loss: 0.3012 - acc: 0.8672 - val_loss: 0.6252 - val_acc: 0.8750 Epoch 6/20 200/200 [==============================] - 132s 662ms/step - loss: 0.2719 - acc: 0.8808 - val_loss: 0.6599 - val_acc: 0.6875 Epoch 7/20 200/200 [==============================] - 125s 627ms/step - loss: 0.2503 - acc: 0.8969 - val_loss: 0.6470 - val_acc: 0.7500 Epoch 8/20 200/200 [==============================] - 128s 638ms/step - loss: 0.2347 - acc: 0.9016 - val_loss: 0.8703 - val_acc: 0.6875 Epoch 9/20 200/200 [==============================] - 131s 656ms/step - loss: 0.2337 - acc: 0.9075 - val_loss: 0.6313 - val_acc: 0.6875 Epoch 10/20 200/200 [==============================] - 124s 619ms/step - loss: 0.2159 - acc: 0.9133 - val_loss: 0.7781 - val_acc: 0.7500 Epoch 11/20 200/200 [==============================] - 129s 647ms/step - loss: 0.1962 - acc: 0.9228 - val_loss: 0.6118 - val_acc: 0.8125 Epoch 12/20 200/200 [==============================] - 127s 634ms/step - loss: 0.1826 - acc: 0.9306 - val_loss: 0.5831 - val_acc: 0.8125 Epoch 13/20 200/200 [==============================] - 128s 638ms/step - loss: 0.2071 - acc: 0.9178 - val_loss: 0.4661 - val_acc: 0.8125 Epoch 14/20 200/200 [==============================] - 124s 619ms/step - loss: 0.1902 - acc: 0.9234 - val_loss: 0.6944 - val_acc: 0.7500 Epoch 15/20 200/200 [==============================] - 128s 638ms/step - loss: 0.1763 - acc: 0.9281 - val_loss: 0.6350 - val_acc: 0.6875 Epoch 16/20 200/200 [==============================] - 139s 696ms/step - loss: 0.1727 - acc: 0.9337 - val_loss: 0.4813 - val_acc: 0.8750 Epoch 17/20 200/200 [==============================] - 145s 724ms/step - loss: 0.1689 - acc: 0.9334 - val_loss: 0.3188 - val_acc: 0.7500 Epoch 18/20 200/200 [==============================] - 133s 664ms/step - loss: 0.1650 - acc: 0.9366 - val_loss: 0.4164 - val_acc: 0.8750 Epoch 19/20 200/200 [==============================] - 132s 661ms/step - loss: 0.1755 - acc: 0.9316 - val_loss: 0.5974 - val_acc: 0.8125 Epoch 20/20 200/200 [==============================] - 132s 662ms/step - loss: 0.1616 - acc: 0.9395 - val_loss: 0.4295 - val_acc: 0.8750 Testing Accuracy: 92.13% Conclusion The convolutional neural network achieved a 92.13% accuracy on the test set. You can determine for yourself if that should be labeled as a “success”. For a simple model, I would consider it to be pretty reasonable.
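To round the walkthrough off, here is an illustrative sketch, not from the original post, of how the trained model could be used to classify a single new chest X-ray. The file path is a hypothetical placeholder, and the snippet assumes the model and training_set objects defined earlier.

import numpy as np
from keras.preprocessing import image

# Load one X-ray at the size the network was trained on (the path is a placeholder).
img = image.load_img("./chest_xray/val/PNEUMONIA/some_xray.jpeg", target_size=(64, 64))
x = image.img_to_array(img) / 255.0   # apply the same 1/255 rescaling as the generators
x = np.expand_dims(x, axis=0)         # shape (1, 64, 64, 3)

prob = model.predict(x)[0][0]         # sigmoid output between 0 and 1
print(training_set.class_indices)     # shows which folder name was mapped to label 1
print("Predicted probability:", prob)

Since the final layer is a single sigmoid unit, the output is the probability of whichever class the generator mapped to 1, which is why printing class_indices before interpreting the number is a sensible habit.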
https://towardsdatascience.com/using-convolutional-neural-networks-to-predict-pneumonia-550b773cacff
['Aidan Wilson']
2019-10-24 14:23:39.142000+00:00
['Deep Learning', 'Neural Networks', 'Health', 'Medicine', 'Machine Learning']
SelectUntilDestroyed in Angular
SelectUntilDestroyed in Angular A custom RxJS operator to use with NgRx When using NgRx, we rely a lot on selecting pieces of the store to build our components. Avoiding in-component subscriptions is a good thing, so we normally have containers that hold the observables returned by selectors and we use the async pipe to pass the actual data down to be displayed by the presenters. Sometimes, we do need to subscribe to selectors, and every subscription is followed by the need to unsubscribe. Out of the box, the best way to unsubscribe from a lot of observables is as follows: Which means that for every select from the store, we need to: It would be great to group both select and takeUntil, right? We can achieve that by writing a custom RxJS operator, like this: The downside is that in order to use it, we'd still need to define the component's destroyed Subject, call next, and complete on it: If we could find a way of wrapping that Subject creation and the takeUntil, and listening to the component's ngOnDestroy to trigger the next and complete, that would be awesome. That's where ngneat/until-destroy comes into play. They provide a custom operator that does exactly what we need. By combining untilDestroyed with our custom operator and adding the expected type declaration, we can get to this: Where we use it like:
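The TypeScript snippets this article refers to are not included in this text dump. As a rough, language-agnostic illustration only, here is a small sketch of the takeUntil idea it describes, written with RxPY 4 (the reactivex package); it is not the article's Angular code, and the destroyed subject merely stands in for a component's ngOnDestroy hook.

import time
import reactivex as rx
from reactivex import operators as ops
from reactivex.subject import Subject

destroyed = Subject()        # stands in for the component's ngOnDestroy signal
source = rx.interval(0.1)    # stands in for a long-lived store selector

source.pipe(
    ops.take_until(destroyed),   # the subscription ends as soon as destroyed emits
).subscribe(lambda v: print("value:", v))

time.sleep(0.35)             # let a few values through
destroyed.on_next(None)      # "component destroyed": the stream completes
destroyed.on_completed()

The appeal of the approach described above is that the select and the take-until step are folded into a single operator, so individual components no longer have to manage a destroyed Subject by hand.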
https://medium.com/better-programming/selectuntildestroyed-custom-rxjs-operator-to-use-with-ngrx-428baf7a482d
['João Victor Ghignatti']
2020-07-30 14:21:39.095000+00:00
['Angular', 'JavaScript', 'Rxjs', 'Typescript', 'Ngrx']
Could Barefoot Running Fix my Plantar Fasciitis?
Photo by Pixabay from Pexels Plantar Fasciitis develops due to an insidious overload through the Plantar Aponeurosis (Plantar Fascia) and is characterised by plantar heel pain, usually reported during the first steps taken in the morning or after a long bout of inactivity (Grieve & Palmer, 2017). In coordination with the intrinsic foot muscles, the structure provides support to the arch of the foot, as well as delivering sensory feedback and motor control of the foot. This pathology is described as a sharp sensation developed in the heel and radiating into the arch of the foot (Huffer et al., 2017). Excessive load to the Plantar Fascia leads to a strenuous stretch of the tissue, which in turn leads to the development of microtraumas and subsequent alterations to the architecture of the connective tissue, ultimately resulting in a degeneration of the Plantar Fascia (Ribeiro et al., 2015). From a sample of 3500 runners across 8 studies, Plantar Fasciitis was highlighted as the third most common running-related musculoskeletal injury, meaning the pathology is still affecting a large proportion of runners (Grieve & Palmer, 2017; Lopes et al., 2012). This suggests that alternative treatment and prevention strategies may need to be explored. This research also alluded to the potential detrimental effects of utilising a rearfoot strike (RFS) running technique. However, it was concluded that risk factors for Plantar Fasciitis are multifactorial and there is limited evidence to identify a selection of factors that leave runners more susceptible to Plantar Fasciitis. Photo by Daniel Reche from Pexels Within this article, we’re going to explore and challenge the traditional approaches to treating this pathology, whilst also highlighting how some physical factors MIGHT be exposing particular runners to a higher risk of developing Plantar Fasciitis and how modified barefoot running MIGHT be a welcome change for their bodies. The Risk Factors: Ankle Dorsiflexion Riddle et al. (2003) concluded that the lack of passive dorsiflexion in the ankle, demonstrated by participants diagnosed with Plantar Fasciitis, supported the hypothesis that the reduced range of movement played a role in the etiology. However, because these participants were recruited after being diagnosed with the pathology, there is no clear way of identifying whether this reduction in dorsiflexion was a result of acquiring Plantar Fasciitis, or whether it was a factor which led to the onset of the pathology. The study incorporated rigid inclusion criteria of symptoms, which was mirrored by case reports in runners (Francis et al., 2017; Saxelby et al., 2017). But, despite eight subjects reporting being recreational runners, there is no way of matching their specific risk factors to the rest of the running population. Bias was also accounted for between examiners, by blinding them to which group the subject came from, as well as which side was affected and the identity of the participant. The role of a shortened Achilles tendon was also considered, along with the further biomechanical consequences it could have led to. The proposed mechanism for a reduction in ankle dorsiflexion is linked to a shortened Achilles tendon, which would cause compensatory foot pronation (Irving et al., 2007). Despite the authors stating that the findings should not be generalised to athletes/competitive runners, it could be argued that the same mechanical principles could still be applied to runners. Ribeiro et al.
(2015) demonstrated that those with unilateral Plantar Fasciitis (in both acute and chronic stages) conveyed a higher plantar load when striking with their rearfoot (landing heel first). This adoption of an RFS by runners did not carry a clear causality, although it's a foot strike pattern used by ~75% of runners (Larson et al., 2011). The data extracted from this study demonstrated high levels of ecological validity, due to the sample consisting of 286 sub-elite runners (predominantly recreational). Furthermore, external risk factors, particularly fatigue, were accounted for by recording foot-strike patterns at two points in the race (10 km and 32 km). This highlighted whether runners recorded as forefoot strikers adopted an alternative RFS pattern during the latter stages of the race. Four runners changed from forefoot striking (FFS) at 10 km to RFS at 32 km. High Impact Peaks & Rear Foot Strike RFS can produce a significant impact transient, which has been associated with higher loading rates when compared to alternative strike patterns (Altman & Davis, 2015). For example, an FFS, which is classified by the ball of the foot making initial contact with the ground prior to the heel (Lieberman et al., 2010). FFS is associated with barefoot running, and RFS predominates in shod runners (Larson et al., 2011; Lieberman et al., 2010). The research has shown that these high impact transients were significantly higher in the shod runners who struck the ground with their rearfoot, and they increased further when the shod runners were placed into the barefoot condition and maintained their usual strike pattern. An RFS pattern may have predisposed the sample to a repetitive and excessive load through their proximal plantar region (Altman & Davis, 2012). Additionally, between the shod and unshod groups, it was indicated that the use of trainers facilitated the use of an RFS, due to the thickest portion of cushioning being underneath the heel. Perhaps the constant use of trainers by shod runners could be another predisposing factor, leaving them at a higher chance of acquiring Plantar Fasciitis. Intermittent barefoot running, by encouraging an alternative footstrike pattern and impact transient, might alter this, potentially offering a chance to load the Plantar Fascia structure directly and strengthen the surrounding intrinsic foot musculature (McKeon et al., 2014). Photo by Mateusz Dach from Pexels One of the biggest limiting factors of this research was the conditions under which the tests were conducted: on a treadmill, which provides a constant, flat and stable surface, unlike the frequent variations in surface that distance runners may face. An FFS pattern has been shown to increase the tensile load through the Plantar Fascia (Chen et al., 2019). Because of this, it could be suggested that moderated loading and lengthening of the structure could reproduce similar results to those of an eccentric loading intervention used to treat an Achilles tendinopathy (Alfredson et al., 1998). When the calf muscles were loaded eccentrically, patients reported reduced pain levels and a return to pre-injury activity levels after a 12-week intervention (Mafi et al., 2000). The Achilles tendon and Plantar Fascia demonstrate similar physiological properties (Orner et al., 2018). This could imply that a moderated loading protocol could reproduce similar results for Plantar Fasciitis as it has for Achilles tendinopathies (Murphy et al., 2018).
This could be achieved through inducing an FFS via barefoot running. Additionally, reduced intrinsic foot musculature has been associated with Plantar Fasciitis (Chang et al., 2012). It could be suggested that increasing the surrounding musculature could help decrease tensile loads on the Plantar Fascia and other passive structures that form the foot and ankle. Development of the intrinsic foot musculature has also been reported as a result of barefoot running (McKeon et al., 2014). Photo by Mayo Foundation For Medical Education and Research Current Treatment Recent studies have indicated that "Advice", "Education on Plantar Fasciitis" and "Self-management" are methods of treatment that are heavily utilised by Physiotherapists within the UK. This is understandable given the multifactorial considerations and poor understanding of Plantar Fasciitis risk factors (Grieve & Palmer, 2017; Irving et al., 2006). However, the data hasn't highlighted the effectiveness of these treatments or the success rate of the plans — which is incredibly hard to assess given that the general population's rehabilitation goals are vastly different from those of a seasoned runner. After suffering from Plantar Fasciitis for 40 years, a 62-year-old male runner was put through an 8-week load management and biomechanical alteration intervention (Saxelby et al., 2017). To monitor fluctuations in pain, a numeric rating scale (NRS) was used to convey how the runner responded to the intervention. After the participant was identified as having an RFS pattern, he altered it to a midfoot strike (MFS) pattern, which was tracked with a 9-axis inertial sensor. Post-8-week intervention the runner reported a significant decrease in pain levels — but obviously, this does not show that this treatment plan would have the same effect on another runner of different age, sex, or injury history. Similar findings were indicated from a 27-year-old female triathlete who had suffered from left-sided medial calcaneal pain for 12 months. A similar NRS was adopted by the therapist to measure pain levels in the morning and post-session. Six weeks post-intervention the athlete's symptoms dropped from 6/10 to 2/10 in the morning, but then increased back up to 4/10 after completing a 15-minute jog in her normal running shoes. The triathlete was then prescribed barefoot running on the same surface, after an observed alteration to an MFS and a reduction in pain the following morning were reported (Francis et al., 2017). Photo by Min An from Pexels Clearly, there is an argument to be made for the potential of barefoot running to be applied to a treatment plan for a runner with ongoing Plantar Fasciitis. However, it does need to be made clear that this article isn't advocating that you go and run 10 km barefoot on your usual road route. Nor, if you are suffering from Plantar Fasciitis, does it mean you should start running barefoot around your local neighborhood and expect to make a good recovery. Firstly, if you don't suffer from Plantar Fasciitis, but you do use an RFS, this article isn't saying that you will develop it. Don't try and fix what isn't broken. Secondly, if you've only just started feeling Plantar Fasciitis-like symptoms then PLEASE consult with a Healthcare Professional (HCP) for advice and management. They will guide you in a direction that they see fit for you. That may be a load management program similar to the ones that didn't seem to work for the previously mentioned case studies — which may work for you.
It may even incorporate a small dose of barefoot running on grass or sand — which may also work. No one model fits all! Photo by Artūras Kokorevas from Pexels Let's summarise all of this: Can barefoot running help fix your Plantar Fasciitis? Yes. It most definitely has the potential to. Has barefoot running been an integral part of previous rehabilitation programs that have helped people recover in the past? Absolutely. Should I go out and run barefoot on concrete on my next run? No. Too hard, too soon. Try some loops of your local field or lengths of the beach. Can I damage my feet from barefoot running? Yes. If you’re going to run barefoot anywhere, make sure you’ve checked the area or are very familiar with the surroundings and what wildlife will surround it. Be VERY careful that you don’t step on glass. Should I try switching to barefoot running even though I am not injured? I would not advise it, because you should be enjoying the fact that you’re running as you are with no problems. But if you want to try integrating it into your life then PLEASE do as much research on it as you can beforehand. Will I enjoy running barefoot? Yes. I do, but I also enjoy doing a long run down the country lanes on a Sunday, with my Nike Zoom Flys on. It’s about balance. If you are currently suffering from Plantar Fasciitis then seek advice from an HCP; don’t throw yourself too deep into a brand new form of running too soon; be open-minded with your rehabilitation and try new things later on. Finding things that work further down the line are what makes it fun. MEDICAL DISCLAIMER: The following information is intended for informational purposes only. Consult an HCP to help you diagnose and treat injuries of any kind. References: Alfredson, H., Pietilä, T., Jonsson, P. & Lorentzon, R. (1998) Heavy-Load Eccentric Calf Muscle Training For the Treatment of Chronic Achilles Tendinosis. The American Journal of Sports Medicine, 26 (3), pp.360–366. Altman, A. & Davis, I. (2012) Barefoot Running. Current Sports Medicine Reports, 11 (5), pp.244–250. Altman, A. & Davis, I. (2015) Prospective comparison of running injuries between shod and barefoot runners. British Journal of Sports Medicine, 50 (8), pp.476–480. Chang, R., Kent-Braun, J. & Hamill, J. (2012) Use of MRI for volume estimation of tibialis posterior and plantar intrinsic foot muscles in healthy and chronic plantar fasciitis limbs. Clinical Biomechanics, 27 (5), pp.500–505. Chen, T., Wong, D., Wang, Y., Lin, J. & Zhang, M. (2019) Foot arch deformation and plantar fascia loading during running with rearfoot strike and forefoot strike: A dynamic finite element analysis. Journal of Biomechanics, 83, pp.260–272. Francis, P., Oddy, C. & Johnson, M. (2017) Reduction in Plantar Heel Pain and a Return to Sport After a Barefoot Running Intervention in a Female Triathlete With Plantar Fasciitis. International Journal of Athletic Therapy and Training, 22 (5), pp.26–32. Grieve, R. & Palmer, S. (2017) Physiotherapy for plantar fasciitis: a UK-wide survey of current practice. Physiotherapy, 103 (2), pp.193–200. Huffer, D., Hing, W., Newton, R. & Clair, M. (2017) Strength training for plantar fasciitis and the intrinsic foot musculature: A systematic review. Physical Therapy in Sport, 24, pp.44–52. Irving, D., Cook, J. & Menz, H. (2006) Factors associated with chronic plantar heel pain: a systematic review. Journal of Science and Medicine in Sport, 9 (1–2), pp.11–22. Irving, D., Cook, J., Young, M. & Menz, H. 
(2007) Obesity and Pronated Foot Type May Increase the Risk of Chronic Plantar Heel Pain. Medicine & Science in Sports & Exercise, 39 (Supplement), p.S69. Larson, P., Higgins, E., Kaminski, J., Decker, T., Preble, J., Lyons, D., McIntyre, K. & Normile, A. (2011) Foot strike patterns of recreational and sub-elite runners in a long-distance road race. Journal of Sports Sciences, 29 (15), pp.1665–1673. Lieberman, D., Venkadesan, M., Werbel, W., Daoud, A., D’Andrea, S., Davis, I., Mang’Eni, R. & Pitsiladis, Y. (2010) Foot strike patterns and collision forces in habitually barefoot versus shod runners. Nature, 463 (7280), pp.531–535. Lopes, A., Hespanhol, L., Yeung, S. & Costa, L. (2012) What are the Main Running-Related Musculoskeletal Injuries?. Sports Medicine, 42 (10), pp.891–905. Mafi, N., Lorentzon, R. & Alfredson, H. (2000) Superior short-term results with eccentric calf muscle training compared to concentric training in a randomized prospective multicenter study on patients with chronic Achilles tendinosis. Knee Surgery, Sports Traumatology, Arthroscopy, 9 (1), pp.42–47. McKeon, P., Hertel, J., Bramble, D. & Davis, I. (2014) The foot core system: a new paradigm for understanding intrinsic foot muscle function. British Journal of Sports Medicine, 49 (5), pp.290–290. Murphy, M., Travers, M., Gibson, W., Chivers, P., Debenham, J., Docking, S. & Rio, E. (2018) Rate of Improvement of Pain and Function in Mid-Portion Achilles Tendinopathy with Loading Protocols: A Systematic Review and Longitudinal Meta-Analysis. Sports Medicine, 48 (8), pp.1875–1891. Orner, S., Kratzer, W., Schmidberger, J. & Grüner, B. (2018) Quantitative tissue parameters of Achilles tendon and plantar fascia in healthy subjects using a handheld myotonometer. Journal of Bodywork and Movement Therapies, 22 (1), pp.105–111. Ribeiro, A., João, S., Dinato, R., Tessutti, V. & Sacco, I. (2015) Dynamic Patterns of Forces and Loading Rate in Runners with Unilateral Plantar Fasciitis: A Cross-Sectional Study. PLOS ONE, 10 (9), p.e0136971. Riddle, D., Pulisic, M., Pidcoe, P. & Johnson, R. (2003) Risk Factors for Plantar Fasciitis. The Journal of Bone and Joint Surgery-American Volume, 85 (5), pp.872–877. Saxelby, J., Paya Gamez, J. & Heller, B. (2017) A Holistic Approach to Managing a Runner with Recalcitrant Plantar Fasciitis outside the Clinic. Podiatry Now, 20 (8), pp.8–12.
https://medium.com/illumination/could-barefoot-running-fix-my-plantar-fasciitis-da86d921a7c4
['Rhys Burton']
2020-08-23 15:06:51.924000+00:00
['Fitness', 'Lifestyle', 'Life', 'Running', 'Health']
Dives
Image Credit: Mine A smoky room and old rock and roll, the friendly scent of whiskey hangs in the air. The room breathes a lazy tune in time with a loose bassoon and the scratch of old vinyl, deep and warm. The old upright bass jumps thumping through the 12 bar blues as a beautiful Latina dances alone under low greasy lights, pulse quickened the brass. The record switches to a smooth old tune. She sits to kiss her cigarette leaving red lipstick rings, a barely seen sheen of glimmering sweat on her chest as she rests. The music plays on like it’s 1959. Everyone who looks my way sees just fifteen feet between me and a whiskey neat. Moving through the faceless crowd I am a slight moment of acknowledgment as they move aside, forgotten, to fade back into my chair. The Latina dances on unaware, a red bow in her hair. I swim among the beautiful youth — stolen, broken, ruined, lied to, patronized, a generation of entitled idiots inheriting a shiny empty box, too content with sleazy ease and the guiding bleat — vacant iPhone junkies looking for a facebook fix. Instant isn’t appreciated. It’s demanded. The DJ spins forgotten jewels like tiny sparkles in a sea of gray, dead voices more full of life than half the people tapping their feet, but a smoke and a lowball fakes a certain cloying camaraderie for a little while. Back to 21st-century indifference in just a few hours. My glass is empty but for Jameson’s ghost and a sense of possibility, but sunk in the dim corner I am loathe to break my shallow meditation between the music, the pen, and the smoky sweet air. I wonder if Ginsberg would have — could have — written “Howl” if there had been a big screen hanging over the bar, a flashing plasma distraction, even while muted: Newsflash — a mother kills her children, then walks free. A teenager kills his parents, then throws a party while their still-warm bodies bleed deep into their bedroom carpet… but I can look away and put my mind at ease. My glass is full again, and for all my woes I still have all ten toes. On with the show. A drop rolls off my nose — it’s hot here in old San Antone. I’m not alone in a sweating crowd full of beer-foam smiles and cigarette breath. I wipe my face and lean back. It’s easy here, black, white, brown — it’s all rock and roll and a lime in your glass, for a little class, sip or shoot and you shake your ass because it’s stupid and pointless, cathartic and cleansing, for a few hours — It's almost more than I can bear. There’s a blue-eyed beauty next to me and she’s wondering what I’m writing. Suddenly I’m nervous, but she’s got nice lips. I hand her a poem I just wrote. She nods her head and moves on. It’s just as well. If she doesn’t get that she’d only break my heart. Still, I smell her in the air as she passes, sugar and vanilla through the smoke. It would have been nice to get my poem back, but I can’t find it in me to be concerned while I sink into the sofa to the sound of Merle Haggard and watch an older couple dancing. He’s got a grimy baseball cap but his smile is bright. She’s twenty years past beautiful, but she’s a beauty tonight. The light in their faces have me smiling if only for the space of a song.
https://medium.com/literally-literary/dives-ca6e85a45eb9
['Heath ዟ']
2019-06-24 20:38:10.179000+00:00
['Poetry', 'Bars', 'Whiskey', 'Prose Poem', 'Music']
What’s the relationship between Firebase and Google Cloud?
There’s one super-important thing to know about these project containers. Since the underlying project is the same for both Firebase and GCP, if you delete the project using either Firebase or the Cloud console, you will delete everything in that container, no matter where it was configured or created. So, if you created a project with the Cloud console, then add Firebase to it, then delete the project in the Firebase console, all your Cloud data will also be deleted. An existing GCP project can be configured to add Firebase services Now let’s imagine, instead, that you’ve created a project in the Cloud console. At the outset, your project won’t have anything directly related to Firebase configured in it. After all, the Cloud console doesn’t know if you intend to build a mobile app, so why set that up? But if you have an existing Cloud project, you can very easily add Firebase to it. To add Firebase services to an existing project, go to the Firebase console, click the “add” button. When it asks for your project name, you have the opportunity to choose an existing project from the dropdown that shows your existing projects that don’t have Firebase added. When you select a project and proceed from this point, all the APIs and services that power Firebase products will be automatically enabled in your project, and you’ll be able to use the Firebase console to work with those products. If you’re wondering what exactly I mean by “APIs and services”, this is a GCP concept that’s only visible in the Cloud console. Here’s a screenshot of the APIs and services dashboard from the Cloud console after Firebase has been added to a project: Here, you can see a number of APIs (enabled by default), along with some Firebase product APIs highlighted in the red box. This detail of enabled APIs is hidden from developers in the Firebase console, because it’s not really necessary to know. However, knowledge of GCP APIs and services gains importance as an app’s backend becomes more sophisticated. For example, an app developer might want to make use of the Cloud Vision API to extract text from images captured by the device camera. And then, go further and translate the text discovered in that image using the Cloud Translation API. To use these APIs (and get billed for them), you have to enable them in the Cloud console. Once enabled, you can call them from your backend code (deployed to Cloud Functions, for example). As you dig around in each console, one thing you might notice is that the set of products you can manage in the Firebase console has three items in common with the set of products in the Cloud console. These products are Cloud Storage, Cloud Firestore, and Cloud Functions. While each product is the same at its core, regardless of where you’re viewing it, they are each organized and managed in very different ways between the Firebase console and the Cloud console. This leads me to my next point. Firebase adds SDKs, tools, and configurations to some Google Cloud products As you might guess from their names, Cloud Storage, Cloud Firestore, and Cloud Functions are Google Cloud products. Technically, they are not Firebase products, even though you can work with them in the Firebase console and manipulate them in your app using Firebase SDKs and tools. First, some quick definitions: Cloud Storage (Firebase, GCP) is a massively scalable file storage system. Cloud Firestore (Firebase, GCP) is a massively scalable realtime NoSQL database. 
Cloud Functions (Firebase, GCP) provides serverless compute infrastructure for event driven programming Without Firebase in the picture, these Cloud products are typically used in enterprise environments, where data and processes are mostly controlled within Google Cloud, or some other backend. To work with these products programmatically, Google Cloud provides client APIs meant for backend code, along with the command line tools gcloud and gsutil. With Firebase in the picture, these three products are enabled to work seamlessly with mobile apps by providing additional SDKs for mobile clients, additional tooling with the Firebase CLI, and a way to configure security rules to control access to data through the provided SDKs. I’ll talk about some of the specifics of these Firebase additions in future posts. (Since I mentioned Cloud IAM earlier, I should also mention that Firebase offers additional IAM roles for some Firebase products that give other members of your team granular access to those products, without the risk of them making a dangerous change elsewhere in your project.) Note that the names of these three Cloud products don’t change from a Firebase perspective. I know it’s tempting (and natural!) to say things like “Firebase Storage” and “Firebase Functions”, but these names aren’t accurate. Am I being pedantic about this? Perhaps, but you won’t find these names anywhere in formal documentation! However, you will see names like “Cloud Storage for Firebase” and “Cloud Functions for Firebase” when dealing with the Firebase add-ons for these Cloud products. OK, what’s the upshot of all this? If you’re a Firebase app developer, you probably created your project in the Firebase console. But, at some point, you might need to jump over to the Cloud console for some administrative tasks, to expand your cloud infrastructure, or make use of Cloud APIs. The Firebase console is just the beginning to build out the infrastructure of your mobile app. If you’re a Cloud infrastructure developer, and you want to build mobile or web apps against the data you’ve already stored, you’ll need to jump into the Firebase console to deal with configurations and tasks that are unique to the Firebase additions to some Cloud products. In fact, Actions on Google projects are also GCP projects (if you’re working with DialogFlow). These projects have Firebase enabled by default, so that’s another way you could end up with a new perspective on a GCP project. In any case, no matter how your project came into existence, the console you started with might not end up being the only console you use. Thinking of a project primarily as a container for services and APIs makes this transition easier. Each console is just giving you a view of those services and APIs in a different way. Read more about the differences between Firebase and Google Cloud with respect to these products:
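To make the earlier point about enabling Cloud APIs a little more concrete, here's a rough sketch of what that backend code might look like in Python. It is an illustrative example of mine, not something from the Firebase or Cloud documentation, and it assumes the Vision and Translation APIs are enabled in the project, the google-cloud-vision and google-cloud-translate client libraries are installed, and default credentials are available to the runtime (for example, the service account of a Cloud Function).

# Rough sketch only: extract text from an image with the Cloud Vision API,
# then translate it with the Cloud Translation API. Assumes both APIs are
# enabled in the project and the client libraries are installed.
from google.cloud import vision
from google.cloud import translate_v2 as translate

def extract_and_translate(image_bytes, target_language="en"):
    # Detect text in the captured image.
    vision_client = vision.ImageAnnotatorClient()
    response = vision_client.text_detection(image=vision.Image(content=image_bytes))
    annotations = response.text_annotations
    if not annotations:
        return ""
    detected_text = annotations[0].description

    # Translate the detected text into the target language.
    translate_client = translate.Client()
    result = translate_client.translate(detected_text, target_language=target_language)
    return result["translatedText"]

Whichever console you created the project in, calls like these are billed against the same underlying project, which is exactly the point of the shared container.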
https://medium.com/google-developers/whats-the-relationship-between-firebase-and-google-cloud-57e268a7ff6f
['Doug Stevenson']
2020-01-08 19:14:33.569000+00:00
['Google Cloud Platform', 'Software Engineering', 'Software Development', 'Firebase', 'Cloud']
Immersive Design: The Next 10 Years of Interfaces
Immersive Design: The Next 10 Years of Interfaces A look into what happens when we design beyond a screen Like many designers, I started my career as a Graphic Designer. I dealt in picas, carried Pantone books, and swore to measure twice and cut once. Then the web came along and with it came Web Designers. We had to become acquainted with HTML, CSS, Javascript and we’re still trying to keep up with the right way to build for it. These websites quickly demanded more interaction from us when Flash entered the scene and conquered our hearts. We turned our attention to animation to convey expressive user flows through interaction design. Then, the iPhone showed up and forced us to think smaller. We got excited about skeuomorphism, learned about pixel density, and made a vow to design mobile first. After a while, we tried to combine all of the above into a holistic practice that would buy us “a seat at the table,” where we could think not just about aesthetics, interactions, and user needs, but also business needs. And so, the modern Product Designer was born. I’m willing to bet that, like many designers before it, the Product Designer is approaching extinction, and setting the stage for the Immersive Designer. Virtual Reality (still) matters Over the last decade, we’ve seen content move from newsstands, to desks, to our laps, and then into our hands. It seems clear that the next step is to remove the device altogether and place the content in the world itself, eliminating the abstraction between the content and its audience. We call the process of designing for this Immersive Design, which includes VR/AR/MR/XR — basically all the Rs. We are seeing this realized today in phones through Augmented Reality. Tech giants like Apple, Google, and Samsung are rushing to conquer the AR space like a modern Christopher Columbus in search of spices. We’re seeing identity transfer setting a trend in animojis and virtual characters walk around our videos like a Roger Rabbit fever dream. However, designing for mobile Augmented Reality today feels like developing for the Commodore 64 in 1982; investing in a platform that’s novel but filled with practices that will be rendered obsolete before they’re relevant. I’ve found that Augmented Reality in 2018 has two major limitations when it comes to Immersive Design: field of view and input. Field of view So far, Augmented Reality is still restricted to a rectangle in your hands. Content can only aspire to be a window into another world; it hasn’t quite inhabited our own yet. Users feel trapped outside in the mundane world while all the fun is happening inside the phone in their hands as if A-ha’s Take on Me never made it to the first chorus. Input In 2018, we’ve developed a language for using facial gestures like opening our mouth or raising our eyebrows to control Augmented Reality masks, other experiences rely on the now primitive touch interaction, while the ambitious ones rely on voice commands to interact with the world inside the screen. The input available on the market limits these interactions. However, once we inevitably obviate the phone and achieve immersion through AR glasses, we’ll have to go back to the drawing board and try to answer the billion-dollar question: how do you interact with content in space? This is where Virtual Reality comes in. The jury is still out on whether Virtual Reality belongs in your living room or as a museum-like destination that you plan for. 
We’re still experimenting with the medium to find the adequate cadence for virtual experiences and navigating worlds in the much-adored metaverse. If you think about it, it took film quite a bit of time to arrive at the standard 90-minute duration, which is about as long as it takes your bladder to digest a liter of Coke at the cinema. Today, Virtual Reality has found a fit as the best way to explore Immersive Design problems present in the Augmented Reality future we crave by taking advantage of a more immersive field of view and using natural gestural interactions. Challenges like thinking in 3D and using volumetric UI that reacts to the environment and the people in it. Not only can Virtual Reality improve our quality of life by providing great escapism, but it can also open up a bunch of questions around how people could interact with technology if it were all around us. Among other things, I’ve found that Immersive Design invites designers to question the line between content and UI and rethink the process for creating digital products. Content is changing As designers we’re often told to get out of the way. To be “content first” and make room for the reason people are using your product in the first place. However, Immersive Design poses an interesting question: where does the line between content and UI start and end? Game designers have been asking themselves this question for decades. As they envision a world to be inhabited by players, the interface to navigate it can often be abstracted into menus that live outside the world’s logic. For example, the interface to start a game often lives in this weird in-between software that acknowledges the existence of the world inside the game by using the game’s characters and aesthetics, but operates based on the rules of the player’s world. And so, video game companies draw a line between UI and Game Designers. There’s logic to this decision: Game Designers are often proficient in 3D tools while UI designers generally work in 2D. This decision can sometimes lead to immersion-breaking solutions that require players to suspend their disbelief when the game reminds them they’re in a video game with video game systems. The explicit line between UI and content is tolerable in a video game, but as we step into the world of Immersive Design, we won’t necessarily have the luxury of flat menu trees that exist outside our reality. We are tasked with finding solutions for UI that follows the rules of our augmented world; where do the menus come from and how do we interact with them? As they’ve matured, video games have given us examples of how design can be woven into the environment and blur the line between content and interface.
https://uxdesign.cc/immersive-design-the-next-10-years-of-interfaces-16122cb6eae6
['Gabriel Valdivia']
2018-09-17 19:30:11.722000+00:00
['Virtual Reality', 'UI', 'User Experience', 'Design', 'UX']
Bee the Protector they Need
Our hectic lives flow so fast we forget To smell fragrant flowers or watch the sunset Suffering Polar Bears seem so far away Remember messengers & observe them today Lean in and listen — unfold stories buzz They deserve our care and attention because Honeybees are starving and they’re thirsty too They make all the food possible for me and for you Insects know the difference between life and death All littles work hard to protect their last breath Pollinators are an important relationship to re-establish Or at mealtime we’ll stare sadly at a big empty dish How unappreciative we are to have forgotten to care What happens when we can’t get food water or air? Pandemics fires floods; lives cannot depend on politicians Our sweet buzzing friends cannot survive these conditions Spray some water on an outdoor plant for the bees Plant pollen-y flowers they can eat & lush trees Learn why we owe tiny bees a tremendous debt Together bees/humans/earth need all the help we can get Michele the Trainer 2020 Sunshines of our Ocean Planet, there is nothing we can buy to remedy climate change, but we can learn and uplift others. If you want to join the conversation about climate change, I’m reading The GenZ Emergency by Dr. Reese Halter which is rich with solutions, research, references and ideas for our future (and poetic inspiration!)
https://medium.com/age-of-awareness/bee-the-protector-they-need-1417c0354461
[]
2020-11-17 19:24:45.442000+00:00
['Age Of Awareness', 'Poetry', 'Climate Change', 'Michelethetrainer', 'Sustainability']
PyTorch Lightning 1.0: From 0–600k
The Lightning DNA AI research has evolved much faster than any single framework can keep up with. The field of deep learning is constantly evolving, mostly in complexity and scale. Lightning provides a user experience designed for the world of complex model interactions while abstracting away all the distracting details of engineering such as multi-GPU and multi-TPU training, early stopping, logging, etc… Frameworks like PyTorch were designed for a time when AI research was mostly about network architectures, an nn.Module that can define the sequence of operations. VGG16 And these frameworks do an incredible job at providing all the pieces to put together extremely complex models for research or production. But as soon as models start interacting with each other, like a GAN, BERT, or an autoencoder, that paradigm breaks, and the immense flexibility soon turns into boilerplate that is hard to maintain as a project scales. Unlike frameworks that came before, PyTorch Lightning was designed to encapsulate a collection of models interacting together, what we call deep learning systems. Lightning is built for the more complicated research and production cases of today's world, where many models interact with each other using complicated rules. AutoEncoder system The second key principle of PyTorch Lightning is that hardware and the "science" code must be separated. Lightning evolved to harness massive compute at scale without surfacing any of those abstractions to the user. By doing this separation, you gain new abilities that were not possible before, such as debugging your 512-GPU job on your laptop using CPUs without needing to change your code. Lastly, Lightning was created with the vision of becoming a community-driven framework. Building good deep learning models requires a ton of expertise and small tricks that make the system work. Across the world, hundreds of incredible engineers and PhDs implement the same code over and over again. Lightning now has a growing contributor community of 300+ of the most talented deep learning people around, who choose to pool that energy, implement those same optimizations once, and let thousands of people benefit from their efforts.
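To give a feel for what such a "deep learning system" looks like in code, here is a minimal autoencoder written as a LightningModule. This is a generic sketch of mine rather than code from the release itself, and it assumes pytorch_lightning is installed and that train_loader is an existing DataLoader of 28x28 images.

# Minimal sketch of an autoencoder "system" in Lightning. Assumes
# pytorch_lightning is installed and train_loader is an existing DataLoader.
import torch
from torch import nn
import pytorch_lightning as pl

class LitAutoEncoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # Two interacting sub-models live inside one system.
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
        self.decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))

    def training_step(self, batch, batch_idx):
        # The "science" code: how the models interact during one optimization step.
        x, _ = batch
        x = x.view(x.size(0), -1)
        x_hat = self.decoder(self.encoder(x))
        loss = nn.functional.mse_loss(x_hat, x)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# The engineering (devices, precision, logging, checkpoints) stays in the Trainer:
# trainer = pl.Trainer(max_epochs=5)
# trainer.fit(LitAutoEncoder(), train_loader)

Scaling from a laptop CPU to a multi-GPU cluster then becomes a Trainer argument rather than a rewrite of the science code, which is the separation the post describes.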
https://medium.com/pytorch/pytorch-lightning-1-0-from-0-600k-80fc65e2fab0
['Pytorch Lightning Team']
2020-10-21 14:36:31.649000+00:00
['Machine Learning', 'Deep Learning', 'Pytorch Lightning', 'Pytorch', 'AI']
Setup a Hyperledger Fabric Host and Create a Machine Image
Overview During training and testing of Hyperledger Fabric networks and chaincodes, I always need a "standard" host running Hyperledger Fabric. By "standard" I mean nothing more than an Ubuntu host with the prerequisite components (link) and the Hyperledger Fabric images, tools and samples (link). They are well documented in the Hyperledger Fabric documentation. While various cloud providers have their own templates to build such a host, or a more friendly way to launch the whole fabric network, I always encourage those wishing to learn Hyperledger Fabric to "do it the hard way", that is, to go through the whole step-by-step process at least once. It helps you understand more about Hyperledger Fabric. Creating a host (machine) image speeds up the whole process. All cloud providers allow you to first create a compute instance. After all required components are installed and the host is functioning, a machine image can be created. This machine image can later be used for creating a new host with those components already installed. That means the host is ready for my use without going through the installation process again. Here I am using AWS as an example. By building my own Amazon Machine Image (AMI), I can easily launch an EC2 instance with all the components I need. Our host will run Ubuntu 18.04, with the prerequisite and Hyperledger Fabric related tools and images. After everything is installed, I will make it into an AMI. Finally, we will show how to launch a new host with this AMI. The last step is exactly what we need to do next time we need a new Hyperledger Fabric host. You can explore other cloud providers with similar capabilities. Launch an EC2 Instance Here we first select Ubuntu Server 18.04 LTS (HVM). It is the base on which we are going to install all the components later. Choose the t2.small instance type on the next page. This is the smallest type on which I can successfully install everything. Click Next and accept everything thereafter until the Security Group page. As I am using this for testing, I always "open all" for my instance. Note that in production you should configure the security group properly based on your actual needs. In Configure Security Group, either create a new group to open all, or select an existing group if you already have one. Here you can see my "open all" security group. Then we are ready to Review and Launch the instance. Click Launch. Select the proper key for the instance, and we will access the host through SSH. Install Prerequisite Components for Hyperledger Fabric The prerequisite components for a Hyperledger Fabric host are shown here. In summary we are installing the following: curl (if the version is not up to date) docker and docker-compose Go programming language Node.js runtime and NPM Python We will first access our newly launched instance using the public IP (here mine is 3.85.40.248). Use ssh to access the node ssh -i <key> ubuntu@<public_ip> As good practice, always begin with sudo apt-get update Install cURL Install or update curl. sudo apt install curl Install docker and docker-compose While you can follow the standard way to install them, I use the following command, which installs both docker and docker-compose at versions good enough for our use. sudo apt-get -y install docker-compose And as standard practice for docker installation, sudo usermod -aG docker $USER Exit and log in to the host again and check whether both docker and docker-compose are properly installed.
docker -v docker-compose -v Install Go Programming Language Here we use version 1.11.11. wget https://golang.org/dl/go1.11.11.linux-amd64.tar.gz sudo tar -xvf go1.11.11.linux-amd64.tar.gz export GOPATH=$HOME/go export PATH=$PATH:$GOPATH/bin Add these two lines to the end of the file .bashrc, such that they will run on every login. export GOPATH=$HOME/go export PATH=$PATH:$GOPATH/bin Install Node.js and NPM Per the suggestion, we are installing Node.js version 8.x. curl -sL https://deb.nodesource.com/setup_8.x | sudo bash - sudo apt install nodejs node -v npm -v Install Python Ubuntu 18.04 comes with a proper installation of Python 2.7. Now the prerequisites are complete. We move on to the installation of the Hyperledger Fabric components. Install Hyperledger Fabric Components The Hyperledger Fabric components we are going to install are the Hyperledger Fabric docker images, the Hyperledger Fabric binary tools and the Fabric samples. The detail is shown here. Installation It is quite straightforward to install all these components with this command. It will take a while, in particular for downloading the docker images. Verification To see whether the installation is complete, we check the three portions. docker images: all docker images are downloaded, and we can see they are the version we specified (1.4.1). docker images fabric samples: a directory fabric-samples is downloaded. fabric binary tools: they are stored inside the fabric-samples/bin directory. Finally, to see whether the whole installation is complete, we will use the "Build Your First Network" scripts to bring up and tear down a sample network. It is provided inside fabric-samples. cd fabric-samples/first-network ./byfn.sh up If everything goes smoothly, from the output you will first see a big START and finally a big END. Here I omit all the details about the First Network. If you are interested in the detail, check my other article (link) on what's inside. Finally, tear down the First Network. Make sure you tear things down to leave a clean environment (no more containers running and all chaincode images removed). And quit the shell. ./byfn.sh down exit Create an AMI on this Host We are done with our software installation. Now we go back to the AWS console and create an AMI from this host. In the AWS console, select the instance we are working on. From Actions select Create Image. Give a meaningful name and description for this AMI. Click Create Image. You can check the state of the AMI: select AMI on the left panel and you can see your image. It takes some time before the image is completely created. Launch an EC2 instance with our AMI Now we first terminate our own instance (yes, to save money). Then we can go to AMI, select our newly created image and launch an instance. You will go through the same process of launching a new instance, except that you are not choosing Ubuntu 18.04 any more. You still have to go through the instance type (t2.small) and security group before you can launch this new instance. This is our new instance, with public IP 18.234.228.77. We will SSH to this new host. To see whether things are properly installed we can check the following. // should have no running containers docker ps // hyperledger fabric docker images are all there docker images cd fabric-samples/first-network // wait till the whole script completes ./byfn.sh up // tear down the first network and chaincode ./byfn.sh down Here we have the "standard" host for Hyperledger Fabric for testing and practicing.
Don't forget to terminate the host once you have done your testing. And yes, you can quickly launch another one when needed next time with this image. Summary This article showed a simple way to create a machine image for fast preparation of a Hyperledger Fabric host. We first walked through the installation of the prerequisite components and the Hyperledger Fabric components on an Ubuntu host. Then we made a machine image (an AMI here, as I am using AWS). Finally, a new EC2 instance can be launched from this AMI without installing everything from scratch again.
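If you find yourself repeating the image-creation and relaunch steps often, they can also be scripted. Here is a rough sketch using the AWS SDK for Python (boto3); it is my own illustration rather than part of the walkthrough above, and the instance ID, AMI name, key pair and security group are placeholders you would replace with your own values.

# Rough sketch: create an AMI from the prepared instance, then launch a new
# host from it later. Assumes boto3 is installed and AWS credentials are
# configured; every ID and name below is a placeholder.
import boto3

ec2 = boto3.client("ec2")

# Create the machine image from the instance we just prepared.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",          # placeholder instance ID
    Name="fabric-1.4.1-base",
    Description="Ubuntu 18.04 with Hyperledger Fabric prerequisites and samples",
)
print("AMI requested:", image["ImageId"])

# Later, once the AMI state is "available", launch a fresh Fabric host from it.
instances = ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t2.small",
    KeyName="my-key",                           # placeholder key pair name
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    MinCount=1,
    MaxCount=1,
)
print("New instance:", instances["Instances"][0]["InstanceId"])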
https://kctheservant.medium.com/setup-a-hyperledger-fabric-host-and-create-a-machine-image-682859fd58ba
['Kc Tam']
2019-06-27 03:42:24.255000+00:00
['AWS', 'Blockchain', 'Hyperledger Fabric', 'Guides And Tutorials']
How To Identify A White Knight Narcissist
NARCISSISM How To Identify A White Knight Narcissist Their motives seem altruistic, but they’re still narcissists Openclipart — CC0 1.0 Universal license Not all narcissists are easily identifiable. When we think “narcissist,” we tend to think of someone who is preening, vain, hogging the spotlight; someone who doesn’t have a care in the world for anyone but themselves. Narcissists are not capable of deep empathy, but they are very much keen observers of people. Some of them learn to hide their true selves behind a mask of altruism. Today we will take a look at White Knight narcissists, people who do quite a bit of good in the service of others — but who are still narcissists. White Knight narcissists Narcissists have a deep emptiness inside them, a void that must constantly be filled by the attention and admiration of others. This external validation that the narcissist is special and perfect is known as narcissistic supply. Elinor Greenberg, a psychologist, lecturer, and author on narcissistic disorders, coined the term “White Knight narcissist” for narcissists who spend considerable time, energy, and/or money in the service of others. Another term is “pro-social narcissist.” These people might spend time volunteering in the community. They might work for a nonprofit or other organization that puts them in the service of others. The White Knight is a person who might offer to help a neighbor with a household project, or watch their pet while they’re away. The White Knight is often thought of as a really nice, charming, generous soul. These acts, however, are how this type of narcissist draws their supply. They believe they deserve unending admiration and praise for the good deeds they perform. They mask themselves in altruism to become the center of attention whenever they get the chance. They may throw money around, donating considerable money to causes or leaving oversized tips at restaurants — but rest assured, these donations are never anonymous. They are always made with the intention of impressing someone, setting the White Knight narcissist up as someone to be admired, as a pillar of the community. White Knight narcissists are still narcissists Like all narcissists, the White Knight variety lacks object constancy, which means they are not capable of seeing a person (including themselves) as possessing both good and bad qualities at the same time. To them, a person is either all good or all bad. This can cause them to turn on you in an instant, if they feel you are not giving them the attention and admiration they believe they deserve. They can become dismissive and condescending toward you, shifting blame onto you, projecting their own worst qualities onto you. One minute they may call to invite you out to brunch, and the very next minute, they’re bashing you, trashing you, calling you the very worst names. Like all narcissists, White Knights are capable of smear campaigns, creating false rumors (often based in a smidgen of truth) to damage your reputation among your peer group. Often, the victim of such a campaign will have no idea until it is well underway. If your admiration for all that a White Knight narcissist does for you leads to a relationship with them, don’t be surprised if they become less and less helpful and generous as time goes on — at least to you. They will get bored with you and look for new sources of supply, new people to admire them for their good deeds. 
What to do if you find a White Knight narcissist in your midst Although you may be tempted to confront and expose the narcissist, this is generally a very bad idea, whether White Knight or other variety. Because of their fractured self-esteem, their reaction will likely be explosive. They might launch a smear campaign, take false legal action against you, or even threaten you with physical harm. It is preferable to be pleasant to the narcissist, but from a distance. It’s fine to say, “Hey, great job on that fundraising campaign,” but avoid being effusive in your praise which could lead to them seeing you as a source of supply. Keep someone you identify as a White Knight narcissist at the friendly acquaintance level. If you begin to see them as a true friend — which is easy to do because they are so charming and helpful — you may begin to open up to them, to share things that are personal, to confide in them. That is a mistake. Narcissists see that as a weakness and remember the things you tell them in case they ever need to use those things against you. Avoid electronic communications such as text messages, PMs, or emails with narcissists when possible. They will use those against you at a later date if provoked. They will show them to mutual friends to prove that you are the “crazy” one, and could even show them to the police or use them against you in court. If you work with a White Knight narcissist, again, be pleasant but be distant, and try to communicate face-to-face rather than by email whenever possible. If the White Knight narcissist is your boss, be pleasant and respectful, and if you’re lucky you’ll become one of your boss’s favorites. Even then, though, you should be cognizant that they could turn on you at any time, and be looking for ways out. Summary If you find yourself around someone who does a lot of good for their friends, neighbors, and their community, but who seems to do it as a device to get attention and admiration, you may be dealing with a White Knight narcissist. Although outwardly altruistic, White Knight narcissists are still narcissists. They can still engage in the behaviors found in other forms of narcissism. These include projection, denial, gaslighting, playing the victim, devaluing, and discarding. If you have a suspected White Knight narcissist in your peer group or office, don’t confront them. Pleasant but detached is the best air to project when interacting with them. Avoid sharing personal details of your life with them, and avoid electronic communication with them as much as possible. I hope this introduction has been helpful. See the sources below for more information, and if you would like to read more of my work, join my tribe for weekly updates. Sources Psychology Today: White Knights & Black Knights: Pro-Social & Anti-Social NPD Abuse Warrior: Traits of the White Knight Narcissist Quora: What Is a White Knight Narcissist?
https://paulryburn.medium.com/how-to-identify-a-white-knight-narcissist-8b93cf3e33b0
['Paul Ryburn']
2020-11-04 11:06:06.326000+00:00
['Psychology', 'Relationships', 'Abuse', 'Narcissism', 'Life']
An Overview of AWS Organizations
For example, let's assume you want to deny a specific service to your organization for some reason. For our example, let's deny access to DynamoDB. Enter a name and description for the policy, and then navigate through the service list on the left to find 'DynamoDB'. After selecting DynamoDB, a list of actions is displayed. If you want to deny specific actions, select those, but for our example, we are going to deny all actions in DynamoDB, effectively removing an account's ability to access the service. Before we can submit the policy, we have to select the resources to apply the policy to. In this case, we want to apply this policy to DynamoDB. We have to select DynamoDB as the service, select all of the resources, and because we are opting to deny DynamoDB completely, leave the '*' in the ARN field. Once we have defined everything in our SCP, the resulting JSON looks like
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Deny",
      "Action": [
        "dynamodb:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
This has the effect of denying all actions associated with DynamoDB. As mentioned previously, we should not apply this policy against the master or root account until we are sure it provides the desired result. Once we have validated the result, we can apply the SCP to the appropriate OUs or accounts. Testing the new Policy Before we apply this policy to all of our accounts, AWS recommends verifying it produces the desired result. First, let's validate that the other account can access the resource we are going to block, in this case, DynamoDB. We do this by logging into the master account, and using the switch role feature to switch to the target account. If you have never switched roles before, click on the "Switch Role" item, which displays this page: You have to enter the target account ID, and the target role, which is the "OrganizationAccountAccessRole", which was created in the target account when the organization was defined and the target account was linked to the master account. Because I have already accessed the other account using the Switch Role function, it appears in the Role History above the Switch Role item. After you switch roles, the view in the target account matches where you were in the console in the original account. For example, because I was in AWS Organizations when I switched roles, I am in the same view after switching roles. Let's go create a table in DynamoDB. With our table created, we can see the table details. Now, let's go back to the Master account and apply the DenyDynamoDB policy. Before we can apply the Service Control Policy, we have to enable this feature on the root or master account. Click on "Enable" next to Service Control Policies to enable SCPs on the root account. This shows Service Control Policies are enabled, and we can add our policy to our target account by selecting the target account in the view, and then clicking on Service Control Policies on the right side of the page. After clicking on "Attach", the SCP is attached to the account, and when we access the alternate account, we should not be able to access DynamoDB now. After switching roles, and accessing DynamoDB, we are presented with the DynamoDB Getting Started page. Click on the arrow at the top left of the page. When we click on the Tables item on the left of the page, we get an access denied message because the Service Control Policy is in place.
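If you would rather verify the deny from code than from the console, a quick check like the following can be run with credentials for the member account. This is a hypothetical sketch on my part, not part of the original walkthrough; it assumes boto3 is installed and the active profile belongs to the member account (for example, via the OrganizationAccountAccessRole).

# Rough sketch: confirm the DenyDynamoDB SCP is in effect for the member account.
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

try:
    tables = dynamodb.list_tables()
    print("DynamoDB still reachable:", tables["TableNames"])
except ClientError as err:
    # With the SCP attached, we expect an explicit deny here.
    print("Denied as expected:", err.response["Error"]["Code"])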
All DynamoDB actions are denied at this point, making usage of the service for this account impossible. If this is the desired action for all of the accounts in the organization, then applying the policy to the root account will apply the policy to every account in the organization. SCP Interactions The principal thing to remember with Service Control Policies is to attach them to an Organizational Unit and not to an account. By attaching the SCP to the OU, the SCP is propagated to every account and OU below it in the hierarchy. The second key point is that SCPs define the maximum permissions for the service. Even if an IAM Role or user has more extensive permissions, they will not be granted to the user or role because of the SCP. Settings The Settings view allows you to view the details for the organization, including the master account information and the trusted services configuration. Trusted services can access information about the accounts, OUs, and policies for the organization. AWS recommends adding the desired trusted services to the account using the console so the other tasks that must be performed to enable the access are also performed. Deleting An Account At some point, it may be time to remove an account from the organization. To do this, go to the AWS Organizations dashboard view, and select the account. At that point, you can remove the account. This is best done from the account leaving the organization, as it may be necessary for additional information such as contacts and billing information to be provided. AWS Organizations Pricing There is no cost to using AWS Organizations itself, but any resources created in the accounts associated with the organization may incur charges as defined in the pricing structures for those services. Automation If you are considering the use of AWS Organizations, it is advisable to develop either parameterized CloudFormation templates or a custom application using the SDK to add/remove accounts in the organization. The advantage of this becomes clear as you add Service Control Policies to control and limit the maximum permissions for various AWS services, or, as illustrated in this article, prevent the use of a service altogether. This way you can ensure that when accounts are created, the process is consistent, they can be assigned to an OU, and the appropriate SCPs are applied immediately. AWS Control Tower This is a topic for a future article, but it needs to be mentioned in this context. Designed to assist organizations in creating, managing and controlling a multi-account AWS environment, Control Tower helps by applying the best practices established by AWS through their experience in supporting organizations as they migrate to the cloud. According to the product webpage, AWS Control Tower "can provision new AWS accounts in a few clicks, while you have peace of mind knowing your accounts conform to your company-wide policies". I will dive into AWS Control Tower in a future article. Best Practices Here are some things to consider when getting ready to move beyond your one account and start setting up AWS Organizations. How to organize accounts You must give serious consideration to how you will get organized. There are several options: — Organize by functional domain. Just like we organize people and processes by function, we could do the same for accounts. — Organize by development phase. You may want an account for developers, along with corresponding VPCs for development and testing, along with a separate account for production.
This eliminates the separation of duties issues by ensuring users with access to the development account cannot access the production account. — Organize based on data classification. This is another situation where you may need to create OUs for your most sensitive information, regardless of which part of the organization it belongs to. What works in one situation may not work for another. I would suggest creating OUs based upon business function. For example, finance apps and users are in the finance OU, and each OU has several accounts, one for each of the development phases, and one for production. This has the benefit of keeping the applications and data for that business unit together. For example, general ledger and related functions would be used by finance users, and typically no one else in the company. Those same functions are likely implemented in software and need to communicate with each other; by keeping them in the same OU, with different accounts for development and production, you can limit who has access to this important resource. Other SCPs to consider Aside from denying specific services such as our DynamoDB example earlier in the article, you may also want to consider Service Control Policies restricting what actions can be taken for the security and IAM services. The important thing to remember is you will have to decide what is allowed and not allowed within your specific organization and its specific rules and legal obligations. For example, you may want to prevent users from disabling services like CloudTrail and CloudWatch, prevent configuration changes on services, prevent VPCs from getting internet access, etc. AWS has defined several scenarios and provided sample Service Control Policies to implement them. These are worth examining and applying to your organization if they make sense. Additional Best Practices Some additional best practices to consider when setting up AWS Organizations: — It is strongly recommended you do not create any resources (with one exception) in the Master account. This makes it easier to make high-quality control decisions, and makes it easier to understand the charges on your AWS invoice. — The one exception is CloudTrail. You should set up CloudTrail in the Master Account so you can track all AWS usage across the member accounts. — Every account/OU should be set up to have the least privilege possible, even if you are going to further reduce the privileges using IAM. For example, users who are in the development OU should not have access to the production VPCs and OU. Even then, you may want to limit who can create, modify and delete resources. — Assign Service Control Policies to the OU rather than the accounts. This allows for a better mapping between the organizational structure and the level of access needed within AWS. — After creating a new Service Control Policy, validate its operation in one account before implementing it across all of the OUs. This makes it easier to roll back should the result be something other than anticipated. — Automate the creation of accounts. Create a CloudFormation template or SDK script so every new account is created and configured for the organization. This can create the account, assign it to an OU, attach Service Control Policies, create VPCs, IAM roles, etc. Just as we implement resources using infrastructure as code (e.g. CloudFormation), so should we consider the creation of an account and OU as a similar process; a rough SDK sketch follows below.
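As one possible shape for that automation, here is a rough sketch using the AWS SDK for Python (boto3). It is an assumption on my part rather than code from this article: the OU name, account email and policy body are placeholders, and the credentials are expected to belong to the master account.

# Rough sketch: script the OU, account, and SCP steps with boto3.
import json
import boto3

org = boto3.client("organizations")

# Find the organization root and create an OU under it.
root_id = org.list_roots()["Roots"][0]["Id"]
ou = org.create_organizational_unit(ParentId=root_id, Name="Finance")
ou_id = ou["OrganizationalUnit"]["Id"]

# Request a new member account (creation completes asynchronously; the account
# can be moved into the OU with move_account once it is ready).
status = org.create_account(Email="finance-dev@example.com", AccountName="finance-dev")
print("Account creation request:", status["CreateAccountStatus"]["Id"])

# Create the DenyDynamoDB SCP and attach it to the OU rather than an account.
scp = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": ["dynamodb:*"], "Resource": ["*"]}],
}
policy = org.create_policy(
    Content=json.dumps(scp),
    Description="Deny all DynamoDB actions",
    Name="DenyDynamoDB",
    Type="SERVICE_CONTROL_POLICY",
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=ou_id)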
Conclusion AWS Organizations can be invaluable to enterprises who need multiple accounts to segregate work, and apply either unique SCPs to an account, or more specific IAM roles for specific use cases. It also provides consolidated billing through the master account, so volume discounts can be achieved which might otherwise not be attainable at the individual account level. Additionally, using multiple accounts also allows for the separation of development, test and production environments aside from VPC separation, as more granular control can be achieved at the account level through IAM roles than at the VPC level. Enabling additional trusted services, and simplifying logins to the organization through the AWS Single Sign-On (SSO) service is also possible. In conclusion, using AWS Organizations is a must for anyone needing more than a single account or VPC and is a best practice for managing the services and resources in your AWS environment. References AWS Control Tower AWS Organizations AWS Organizations Tutorials Enabling All Features in your AWS Organization Example Service Control Policies IAM JSON Policy Elements: Condition Managing the Accounts in your AWS Organization Service Control Policies Strategies for using Service Control Policies About the Author Chris is a highly-skilled Information Technology AWS Cloud, Training and Security Professional bringing cloud, security, training and process engineering leadership to simplify and deliver high-quality products. He is the co-author of more than seven books and author of more than 70 articles and book chapters in technical, management and information security publications. His extensive technology, information security, and training experience makes him a key resource who can help companies through technical challenges. Copyright This article is Copyright © 2019, Chris Hare.
https://labrlearning.medium.com/an-overview-of-aws-organizations-92689b93f4ad
['Chris Hare']
2019-10-21 10:54:19.511000+00:00
['AWS', 'Cloud', 'Governanve', 'Cloud Governance', 'Technology']
If You Liked “Oldboy,” Watch “The Handmaiden”
If You Liked “Oldboy,” Watch “The Handmaiden” Con-artistry, class warfare, and plot twists abound in this instant classic Photo by Bagus Pangestu from Pexels I have studied film and literature my entire life, film by proxy as I grew up in the San Fernando Valley, just over the hill from Hollywood, California where the movie industry was in the very air I breathed. My aunt worked for a producer at Columbia pictures for a while, and we saw first-run screenings for employees on the studio lot in a private screening room the day before a big premiere. I’ve never shed the influence of movies on my life, and when I decided to study literature and teaching, movies and movie analogies became my anchor to explore the role of story and narrative in all facets of life. I was fortunate to go to graduate school in a small midwestern town that had a converted opera house as an arts theater, adjoining the best video store I’ve ever encountered. Their collection was as deep as the Grand Canyon, and I was never disappointed. They always had or could get a copy of any film I sought, even the most hard to find titles. For a while, I watched a lot of films in languages other than English. One magnificent film I encountered was the 2003 masterpiece Oldboy, by South Korean director Chan-wook Park. It has high Shakespearean revenge tragedy themes and a gripping reality that keeps the audience on the same edge of discovery as its protagonist. So, if you haven’t seen Oldboy, step away from this review and rent Oldboy immediately. I’ll wait. Now that you’ve seen Oldboy and been blown away, take a look at Chan-wook Park’s film The Handmaiden (2016). I watched this on the heels of seeing Parasite and its historic Academy Award winning honors. Here is another film of class warfare, pitting Korean swindlers of modest means vs. a rich Japanese heiress, set in 1930s Korea during a time of Japanese occupation. The historical backdrop certainly has much to do with the motivations of the characters, but it’s the intricate story telling — and the nature of telling stories — that is what this film is all about. The complexity of plot is reminiscent of Park’s Oldboy but the complexity is heightened in The Handmaiden. In Oldboy, you are on a journey of discovery along with the main character to find out the meaning behind his being held captive for 15 years and then suddenly released to make his way toward the reasons for his captivity. While you aren’t sure what you might find, there is no inherent fear that evil-doers are following along and the character is safe on his path. In The Handmaiden, however, the Korean plotters are constantly at risk of detection, which could carry a risk of death at their Japanese occupiers’ hands. So when the gentleman openly confesses to the Japanese heiress that he and her handmaiden are plotting to take her money, we as audience are confused as to why the plot is revealed. But Park is a master of the plot twist, and this story — or rather, story enfolded within a story, enveloped by another story, and on and on — isn’t about the superficial plot at all. The deeper levels of story-telling, subjugation, and passion involved in story-telling all find their full outlet in the hidden basement. American movies rarely push the envelope of complex plots and intrigue the way Chan-wook Park does, and that’s a tribute to a master story-teller. The lushness of his sets is matched by the luxuriousness of the slow accretion of details and the intimate reveals. 
The passion at the center of the story has nothing to do with the passionate intensity of the handmaiden and the heiress, though there is that, too. It’s reminiscent of Blue is the Warmest Color, but only on the surface. The handling of these scenes, again, sets The Handmaiden apart from most American fare — the sex isn’t gratuitous at all, though it’s quite involved and intricately detailed as is the plot of the entire movie. The connection between the handmaiden and the heiress is the kernel of truth in the telling of the passion of the story, and we watch entranced as the plot is turned upside down, much as the men watch entranced as the heiress reads the stories her uncle has written. Who is in charge of the story? Who gets to decide the end? The brutal Japanese task-masters, or their crafty Korean subjects? Or perhaps this story is about the finer sensibilities of women set against the brutality of masculinity? I have purposely been opaque about the characters and details in The Handmaiden as to not give anything away. This is a journey into a time and place and into the very nature of story-telling that will enliven your spirit and awaken your senses. It’s less about good guys and bad guys and who to root for, the way that we align American sympathies in American cinema, than it is about who gets to tell the story. And who gets to finish it. I hope you enjoy your journey in The Handmaiden. And I hope you can find your way out again.
https://lee-hornbrook.medium.com/if-you-liked-oldboy-watch-the-handmaiden-21d7add13a06
['Lee G. Hornbrook']
2020-04-02 14:52:16.982000+00:00
['Entertainment', 'Film', 'Movies', 'Creativity', 'Movie Review']
Creating ‘Qalam’
Each year, at the onset of Ramadan and for the festival of Eid al-Fitr at its conclusion, people send friends and families themed messages as greetings. To help them celebrate these occasions in 2018, Google Brand Studio conceived the ‘Qalam’ project: a series of artworks by renowned regional artists, created in VR, that could be customised and shared with friends and family. Google approached Stink Studios to build the project and help bring the experience to life. The project began with a series of art capture sessions held in Dubai, London and Turkey. Here, a total of nine artists — whose backgrounds ranged from graffiti artist, through traditional calligraphers, to tattooists — created individual artworks in Tilt Brush, Google’s VR Painting application. These intricate and beautiful artworks then needed to be recreated and made available on mobile platforms for people to customise and share. This is where the challenge for Stink Studios began. These 3D artworks were composed of: Up to half a million vertices Files up to 42MB in size Complex, custom shaders for unusual brush strokes like Fire, Electricity and ‘Chromatic Wave’ With close to 50 artworks to convert and make available in a streamlined mobile experience, we faced significant challenges on multiple fronts. Anatomy of a Tilt Brush brush stroke Tilt Brush is an amazing VR painting application which, conveniently for us, offers an export path from its proprietary .tilt file format to the common .fbx format as well as Base64 encoded JSON. Opening up the .fbx file revealed that the brush stroke geometry is exported with the vertices ordered in the sequence they were created — with brush colour stored in vertex colours and the particular brush textures cleverly created by UV unwrapping the length of the geometry against compact brush texture map tiles that define the shape of the brush start and end. This information gave us confidence that we had almost all the elements needed to recreate the brush strokes outside of Tilt Brush. However, the one component that was missing was the specific shader implementation that gave each brush its particular material properties. The look of the brush shaders was a key characteristic of the Tilt Brush art but unfortunately, given we were missing this information from the fbx file, it was going to be very difficult to recreate.
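To illustrate the UV-unwrapping idea described above (and to be clear, this is a hypothetical sketch, not the actual Tilt Brush export code or the pipeline Stink Studios built), here is a small Python function. It assumes a stroke arrives as an ordered list of centreline points, matching the order-of-creation property noted above, and assigns each point a U coordinate proportional to cumulative arc length so that a brush texture's start and end caps land at the ends of the stroke.

```python
import math

def stroke_uvs(points, v_top=0.0, v_bottom=1.0):
    """Assign UV coordinates along a brush stroke.

    `points` is an ordered list of (x, y, z) centreline positions in the
    order they were painted. Each position yields two UVs (one for the top
    edge of the ribbon, one for the bottom), with U running from 0 at the
    start of the stroke to 1 at the end, proportional to arc length.
    """
    # Cumulative distance travelled along the stroke.
    lengths = [0.0]
    for p0, p1 in zip(points, points[1:]):
        lengths.append(lengths[-1] + math.dist(p0, p1))

    total = lengths[-1] or 1.0  # avoid division by zero for a single point
    uvs = []
    for d in lengths:
        u = d / total
        uvs.append(((u, v_top), (u, v_bottom)))
    return uvs

# Example: a short, roughly straight stroke.
sample = [(0, 0, 0), (0.5, 0, 0), (1.2, 0.1, 0), (2.0, 0.1, 0)]
for top, bottom in stroke_uvs(sample):
    print(top, bottom)
```

With a compact brush texture whose U direction encodes a start cap, body, and end cap, a mapping like this keeps the caps pinned to the stroke ends regardless of how long or curved the stroke is, which matches the behaviour described for the exported geometry.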
https://stinkstudios.medium.com/creating-qalam-d016a0a52d56
['Stink Studios']
2018-06-25 19:24:31.223000+00:00
['Ramadan', 'VR', 'Google', 'Threejs', 'Webgl']
Tested Hair Therapy For The Strong Hairs
REASONS FOR HAIR LOSS The medicines you use, toxic substances, food additives, poor nutrition, harsh shampoos, hair dyes, and hair conditioners can all contribute to hair loss. Health problems also accelerate hair loss; thyroid insufficiency, some skin diseases, and fungal diseases are some examples. Deficiencies of iron, biotin, zinc, vitamin D, omega-3, B12, and selenium in the body can also cause hair loss. Moreover, perming and blow-drying damage the hair. Washing with very hot water every day and shampooing more than twice per wash will also damage the hair, as will frequent swimming in the sea and the pool. Air pollution, exhaust gases, and dust in the air also wear out the hair.
https://medium.com/in-fitness-and-in-health/tested-hair-therapy-for-the-strong-hairs-5988cb75da17
['Onur Inanc']
2020-12-28 21:59:58.488000+00:00
['Hair Care', 'Healthcare', 'Life', 'Hair', 'Health']
Why using Facebook in 2020 means supporting Trump
Mark Zuckerberg has chosen his side, and if you don’t quit the platform, knowing everything we do now, you’re complicit. It’s been four years since Facebook, the world’s most popular social network and owner of Instagram and Whatsapp, completely failed to regulate the intentional use of its platforms to spread disinformation, a key factor in the election of Donald Trump. This was followed by a rash of promises to make Facebook a more open, ethical, and safer social network. Today, though, as we approach another Presidential election, Facebook has not only failed to reform, but has openly embraced its role as enabler of Trump and those like him around the world. The reason that Facebook hasn’t changed is because users haven’t held it responsible. There’s been no large-scale shift from any of their platforms. By now, it’s clear that the post-election promises were Mark Zuckerberg paying lip service to ethics while accepting unregulated political spending ad from the far-right, while also letting engagement-driving disinformation continue to spread. The lack of meaningful action since 2016 makes one thing clear. Using Facebook now means openly supporting the re-election of Trump, because it’s what the platform, and Trump, both want. But it’s not Zuckerberg we should blame. It’s us, the nearly 230 million Facebook users in the United States. Here’s why. The case against Facebook It’s remarkable how little Facebook has changed since 2016. It’s made token efforts to ramp up fact checking with media outlets, only to have several partners leave because the company ignored their concerns and failed to use their expertise to combat misinformation. They’ve run massive ad campaigns claiming that “from now on, Facebook will do more to keep you safe” while, in fact, disinformation, hate, and harassment continued. Zuckerberg even went to Congress, where he apologized and promised change. We’re still waiting. The past four years have been a series of false promises. The latest faux-plan is to create an Oversight Board that would supposedly act like a Facebook Supreme Court, but has no clear guidelines for jurisdiction. Many believe it’s another fig leaf to critics, meant to show action but in reality do it little. Conveniently, this board won’t even get started until after election day. The key problem is leadership — of which, at Facebook, there is just one. Zuckerberg is the CEO, Chairman of the Board, and he controls the majority voting block of all shareholders. He is the dictator of Facebook. Is it any surprise that a platform run dictatorially is enabling dictatorial-minded leaders not only in the US, but also in the Philippines, India, and Brazil? Few corporations are truly democratic, but most have at least some checks and balances on CEO power. Shareholder resolutions have forced other tech giants, like Amazon and Google, to take action on user privacy. Other companies have independent boards that can, if necessary, remove a rogue CEO, as happened last year with WeWork. But because of Zuckerberg’s dual-role and total control of voting power, there’s no check whatsoever at Facebook. Only Zuckerberg can stop Zuckerberg. By now, it’s clear that his only motivation is growth and profits. Facebook has little incentive to police disinformation on its platform because viral disinformation is the addictive material that keeps so many people glued to its platform. Evidence shows that disinformation performs better on Facebook than factual news stories, and also generates considerable revenue. 
As state actors in China, Russia, Saudi Arabia, and elsewhere join the disinformation game, it’s increasingly backed by money, too. As the 2016 election showed, Facebook provides a captive audience, ripe for manipulation, all around the world. It’s time to blame users too Facebook hasn’t changed because we haven’t changed. Despite the seemingly non-stop scandals and constant betrayal of user trust and privacy — RussiaGate, Cambridge Analytica, inciting anti-Rohingya violence — Facebook’s user numbers haven’t dropped, and its ad revenue and stock price have continued to grow. Credit: Author Other social networks, at least, try. Twitter has its issues, but they’re done a lot more to tackle disinformation than Facebook. Researchers have access to data and can monitor and track bot networks — something that’s impossible to do on Facebook. They’ve curtailed the ability of state media actors to buy ads, and have even hidden some of Trump’s tweets — content that Facebook still won’t restrict. They’ve also been pro-actively taking down bot networks, while Facebook usually is reactive. YouTube, too has taken similar measures. The irony is that Facebook is going to be rewarded by its inaction — with millions in ad spending that YouTube and Twitter won’t accept, especially around this fall’s election, targeting you, your family members, and others with disinformation meant to skew and, perhaps, destroy democracy. There’s a political issue at play too. Calls are growing for government regulation of Facebook. Elizabeth Warren, during her Presidential campaign, called for breaking up Facebook and regulating it so “that Russia — or any other foreign power — can’t use Facebook or any other form of social media to influence our elections.” In Asia, countries like Indonesia, The Philippines and Thailand are passing digital taxes that would force companies like Facebook to finally pay their fair share to state coffers — something it has cleverly avoided thus far. Who is standing behind Facebook? The man who the platform helped elect. Trump has made no moves to regulate tech companies, investigate their monopolistic power, and he’s threatening to launch another trade war against countries that try to tax US digital giants. Zuckerberg’s platform won’t police Trump because Trump’s policies are beneficial to Facebook. A Trump presidency likely means four more years of inaction on regulation — exactly what Zuckerberg wants. Yes, there is an ad boycott now, but is it mostly toothless. For now, it seems that whatever Facebook would lose from Unilever, Patagonia, Coca-Cola and others does not make up for that it might lose if it really cut disinformation and hate content (and lost users), or had to accept state regulation. It’s a bigger problem than just political ads and hate speech content. Senators are concerned about the spread of climate change disinformation (backed by big money) on Facebook, which isn’t being fact-checked or monitored at all. When Anne Borden King was diagnosed with cancer, her Facebook feed became full of alternative care ads promoting dangerous pseudoscience. Anti-vax disinformation has been spreading for years, including as, you guessed it, ads. Disinformation = profits for Facebook. The only power over Zuckerberg is us. Quitting Facebook, Instagram, and Whatsapp is the only way to impact Facebook’s share price. There’s no oversight, no accountability. Are we willingly going to let one man control 25–30% of all internet traffic in the US? Determine what content we get to see? 
And, perhaps, destroy democracy. Alternatives I’m tired of hearing about how you won’t be able to keep in touch with your aunt anymore without Facebook. Your laziness is not an excuse anymore. We live in an interconnected age, with an unprecedented array of digital communication tools. There are other ways to keep in touch with distant family members, high school friends, former co-workers, or other quasi-friends besides Facebook. I closed my personal Facebook profile back in 2011 because I didn’t like how the social network was affecting my mental health while in graduate school. I was able to keep in touch with nearly all my friends, in many cases, through more meaningful ways like phone calls. It wasn’t a clean break. When I became a freelance journalist, I set up a Facebook author page, to share articles I wrote and shortly thereafter, a page for my Chili Pepper Project, along with an Instagram profile. I was also an active use of Whatsapp, having joined before the platform was acquired (and ruined) by Facebook. No more. Earlier this year, I closed my fan pages, deleted my Facebook account, deactivated my Instagram, and, after migrating most of my contacts to more secure chat apps, deleted Whatsapp from my phone. I am now completely free of the Facebook empire, for the first time since I joined the platform as a junior in College, in 2004. Two years ago, I wrote a guide on how to #QuitFacebook, and alternatives to more safety, securely, and ethically keep in touch with friends and family. It’s really not that hard — yes, it requires effort, but isn’t that what relationships are all about? Here are a few. Slack for communication in workplaces or within teams — instead of Facebook groups. Chat apps like Telegram and Signal for small groups instead of Messenger. Dropbox, Photobucket, Unsee or Cluster for photo sharing. Pocket, SmartNews and Pulse for news instead of Facebook’s NewsFeed. Please feel free to share other alternatives in the comments. The Time to Quit is Now In 2016, us Facebook users could use ignorance as a defense. We were unaware of how Facebook was being used by Trump and his backers, both within and outside the country, to win the election. We also could take the promises by Zuckerberg to fix the platform at his word. That is not possible anymore. Facebook hasn’t changed, and Zuckerberg has chosen a side, quietly, alongside Trump. Now it’s our time to pick a side too. 2020 is the year of taking stands. All around the country, people are standing up against police violence, against fascism, and against the racist man in the White House. Meanwhile, the top shared posts on Facebook are critical of Black Lives Matter, and oh, and by the way, the company won’t even try to regulate election disinformation, perhaps because Trump is spending millions every week on ads. For Zuckerberg and investors to take notice, our shift must be massive. Despite dozens of brands pledging to temporarily halt advertising, there’s been no response from Facebook. Zuckerberg knows that if he were to accede to demands from ethical brands, he would lose out on far more potential revenue from Trump, dark-money PACS, Chinese and Russian state media outlets, and other nefarious actors. Now we can say it clear — Facebook is enabling racist, fascist ideology to spread. If you can’t quit the platform, you need to accept that you’re supporting Donald Trump. There’s no line anymore. Zuckerberg wants Trump to win to preserve his social media monopoly. 
The company he controls, Facebook, has enabled this dangerous Presidency and is destroying democracy for profit and control of our digital lives. Either you’re part of the movement to save America. Or you’re willing to accept your responsibility in allowing disinformation, violence, and more to spread on a platform that is clearly harming society. Your choice.
https://medium.com/scientya/using-facebook-in-2020-means-supporting-trump-8311b40e254d
['Nithin Coca']
2020-08-10 05:05:41.840000+00:00
['Facebook', 'Trump', 'Social Media', '2020 Presidential Race', 'Election 2020']
A Visual Guide to Graph Neural Network
A Visual Guide to Graph Neural Network Everything you need to know to start working with GNNs Photo by Isaac Smith on Unsplash A graph is a non-linear data structure consisting of nodes and edges. The nodes are sometimes also referred to as vertices, and the edges are lines or arcs that connect any two nodes in the graph. A Visual Representation of a Graph. Graph Neural Network The main idea is that we encode the nodes, which are the elements, and the edges, which are the relationships among those elements. How do you decide what each one represents? Well, that depends on your problem. It could be a social network, a molecular structure, or the connecting roads in Google Maps. So the first part of solving a problem with GNNs is designing a graph representation of your problem. Secondly, for each node you need some information to start with. This information is encoded in a distributed vector representation, or embedding, for each node. It could be any type of information that you want to pass. A general diagram describing the working of a Graph Neural Network Given above is a general representation of a graph neural network. The output is the same graph, but each node ends up with a different vector representation. The output vector representation captures how each node fits within the context of the graph. This output is then passed to the specific task you are trying to solve. Remember, the initial vector representation is created by humans using feature engineering (old-fashioned ML). Graph neural networks incorporate a technique called neural message passing, which describes how each node sends inputs to and receives inputs from its neighbours. Let’s take a sneak peek inside neural message passing. Inside the Graph Neural Network. From the above diagram, you can see that the final value of node F depends on its own current state as well as on information coming from the neighbours connected to it, based on their current states. How does the whole network work? Think of a clock, like the one in a CPU: on every tick, each node receives inputs from its neighbours, computes messages, and updates its state. The number of times we repeat this cycle is a hyperparameter (a design decision). There is no notion of convergence in a graph neural network. All nodes in the network are updated simultaneously (in parallel). Below is a visual representation of how a message passes from one node to the nth node over time t. How message passing occurs as time passes You can see from the above diagram how neural message passing occurs. Initially, each node knows only about itself, then over time about its neighbours, and then about its neighbours’ neighbours. So at the start we feed some information to the GNN, and after the GNN’s processing you learn how each node is positioned inside the network and its relationships with other nodes; you can then use the output vectors however your application requires.
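To make the message-passing loop concrete, here is a minimal NumPy sketch. It is my own illustration, not tied to any specific GNN library or to the diagrams referenced above. Each node starts with a feature vector; on every "clock tick" all nodes are updated in parallel by averaging their neighbours' states and combining the result with their own state through randomly initialised weights. In a real GNN these weights would be learned, and the number of ticks is the hyperparameter mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny undirected graph: adjacency list for 5 nodes (0..4).
neighbours = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

dim = 4                        # size of each node embedding
H = rng.normal(size=(5, dim))  # initial node states (from feature engineering)

# Randomly initialised weights for the update step (learned in a real GNN).
W_self = rng.normal(size=(dim, dim))
W_neigh = rng.normal(size=(dim, dim))

def message_passing_step(H):
    """One clock tick: every node is updated simultaneously."""
    H_new = np.zeros_like(H)
    for v, neigh in neighbours.items():
        msg = np.mean(H[neigh], axis=0)          # aggregate neighbour states
        H_new[v] = np.maximum(                   # ReLU(h_v W_self + msg W_neigh)
            H[v] @ W_self + msg @ W_neigh, 0.0
        )
    return H_new

T = 3  # number of ticks (the hyperparameter / design decision)
for _ in range(T):
    H = message_passing_step(H)

print(H)  # final embeddings, one row per node, ready for a downstream task
```

After T ticks, each node's embedding reflects information from nodes up to T hops away, which is exactly the "knows about itself, then its neighbours, then its neighbours' neighbours" progression described above.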
https://towardsdatascience.com/a-visual-guide-to-graph-neural-network-fcda66fff3e1
['Parth Chokhra']
2020-11-05 02:57:19.423000+00:00
['Machine Learning', 'Neural Networks', 'Deep Learning', 'AI', 'Data Science']
Getting started with JSON(JavaScript Object Notation)
JSON (JavaScript Object Notation) is a storage format that is completely language independent and is used to store and transport data. It’s quite an important topic, as the data we fetch from external APIs usually consists of arrays of elements in JSON format. The syntax of JSON is quite similar to object literal syntax, which also consists of name-value pairs, but in JSON the names must be enclosed in double quotes (and string values are quoted as well). Let us look at the example below : //Object literal syntax let details = { firstName : "John", lastName : "Adams", age : 27 } //JSON syntax { "firstName" : "Mike", "lastName" : "Bush", "age" : 25 } In previous years the XML format, which surrounds the data with tags, was widely used. The above JSON data in XML format is represented as below: <details> <firstName>Mike</firstName> <lastName>Bush</lastName> <age>25</age> </details> As you can see, the XML format is verbose compared to JSON, i.e. for a single value “Mike”, the name “firstName” is repeated twice for the opening and closing tags, which is quite unnecessary. Also, JSON can be parsed into an object literal, which makes it faster to work with. JSON is so popular that even JavaScript understands it and has built-in functions to convert from JSON to object literals and vice versa. JavaScript provides JSON.stringify(), a method to convert data from object literal format to JSON format: const objectData = { firstName : "Mike", lastName : "Bush" } const JSONdata = JSON.stringify(objectData) console.log(JSONdata) Output:- {"firstName":"Mike","lastName":"Bush"} There is another method called JSON.parse() which converts JSON format data to object literal format: const JSONdata = '{ "firstName" : "Mike", "lastName" : "Bush"}'; const ObjectData = JSON.parse(JSONdata) console.log(ObjectData) Output:- {firstName: "Mike", lastName: "Bush"} This is what I learned when I started with JSON. Understanding the fundamentals and methods of JSON is important, as they are an elemental part of accessing information from APIs. Conclusion : JSON syntax is similar to object literal syntax, where each name-value pair uses quoted names. JSON.stringify(): Object >> JSON. JSON.parse(): JSON >> Object. Thank you for taking the time to read the article, and give me a follow on Twitter, where I am documenting my learning.
https://medium.com/analytics-vidhya/getting-started-with-json-javascript-object-notation-38b7a9e0380
['Sarvesh Kadam']
2020-11-03 12:41:41.386000+00:00
['Json', 'JavaScript', 'Web Development', 'Development', 'Javascript Tips']
‘It’s Because You’re Fat’ — And Other Lies My Doctors Told Me
B y age 18, my knees hurt. I didn’t know why, and they didn’t hurt a lot, but they did hurt a bit most of the time. As someone who took a lot of dance classes and played my share of netball, it was annoying, but not something I thought much about. After all, I reckoned, bad knees run in my family. But by age 20, the pain had gone from a bit annoying to definitely annoying. I decided, for the first time, to see a doctor about it. She was a brisk woman with close-cropped grey hair, who glanced at me and told me my knee pain was due to early-onset arthritis as a result of my being overweight. My blood tests were negative for rheumatoid arthritis — but that didn’t matter, she told me. The only way to stop my pain from getting worse was by losing weight. So with the resigned sigh of anyone who has grown up fat, I accepted my fate. I was arthritic, at 20. By 22, things were worse. My knees had gone from hurting a bit most of the time to spontaneously collapsing in blinding pain while I was doing innocuous activities like walking down the street. I went back to the doctor — a different one, because I just saw whoever was available at the student clinic. He asked me about my pre-existing medical conditions. I explained that my arthritis was a result of being overweight. He looked at me incredulously. “That’s not a thing.” No one gets non-rheumatoid arthritis in their twenties as a result of being overweight, he explained. With the resigned sigh of anyone who has grown up fat, I accepted my fate. Instead, he decided we should figure out exactly why my knees were spontaneously collapsing. He sent me for an MRI, and I had a consultation with a specialist surgeon. “Patellae chondromalacia,” the surgeon declared. He showed me the shadows on my scan, which indicated rough patches on my knee caps. It was probably hereditary, exacerbated by my weight. “Okay,” I said. “So what can I do about it?” “You’re just going to have to manage the pain,” he explained. “And once it gets to be too much, you’re going to need your knees replaced. And that will probably be before you’re 30.” Resigned, I accepted my diagnosis. I said goodbye to yoga and dance, which aggravated the condition, and started wondering about how much two new knees might cost, and how I’d get around on crutches. I had eight years before I turned 30; it felt a bit like a death sentence. At 24, my new housemate decided she was joining our local gym, and in a moment of optimism, I decided to go with her. This gym offered a free short session with one of their personal trainers to help newbies learn the ropes. “I’ll put you with Hao,” the receptionist said. “He’s got a physio background; he’s good with injuries.” Hao was intimidating — really tall, super buff, thick Chinese accent that was hard to understand at first. “It says here you’ve got an injury,” he told me. “What is it?” “I’ve got patellae chondromalacia in both knees,” I replied. “It’s-“ “Oh that,” he said, interrupting me. “I can fix that.” What? Hao explained to me that what I had was a pretty standard sporting injury that is usually treated successfully using exercise — a fact that none of my doctors had mentioned. I’d probably injured myself as a result of all that dance and netball I did as a teenager, and it might have been exacerbated by my family history of dodgy knees. It’s normally caught early and treated early — it’s very rare for it to get to the point of causing knees to collapse, but that can happen in serious cases with no treatment. “Work with me for 10 sessions,” said Hao. 
“If you don’t notice a difference, I’ll give you your money back.” Well, after 10 sessions I noticed a pretty significant difference. After six months, the pain that had plagued me for six years was entirely gone. I can’t help but think that there’s a whole lot of physical pain I could have avoided if any of the medical professionals I saw had considered the fact that I might have a sporting injury. And I can’t help but wonder if the reason they didn’t has to do with my weight. When doctors looked at me, they didn’t see a girl who danced, cycled, and played team sports. They saw a fat girl — and they based their diagnosis on stereotypes about what that meant. I’m 29 now, and my knees no longer hurt. I don’t need them replaced — but if I’d listened to the weight-prejudiced opinions of my doctors, I might have. This story is hardly unique. Research shows that doctors have less respect for patients with higher body-mass indexes (BMI), which can lower the quality of care those patients receive. As one study put it: “Many healthcare providers hold strong negative attitudes and stereotypes about people with obesity. There is considerable evidence that such attitudes influence person-perceptions, judgment, interpersonal behaviour and decision-making. These attitudes may impact the care they provide.” Troublingly, many of the ideas that doctors have about fat patients aren’t even grounded in medical fact. Indeed, too often it’s forgotten that the science around weight loss and health isn’t all that settled. Does excess weight cause you to live a shorter life? Maybe, maybe not. Countless studies by BMI category have found that overweight people actually have lower rates of all-cause mortality than normal weight people. When doctors looked at me, they didn’t see a girl who danced, cycled, and played team sports. They saw a fat girl. Some researchers think that if you adjust for the increased risks caused by weight cycling (a.k.a. yo-yo dieting) and dangerous weight-loss drugs, you’d find the same mortality rates for normal, overweight, and obese people — yes, even very obese people. And even without the adjustments, the increased risk for very obese people is only small — not the “you’ll be dead before you’re 30” nonsense often pedalled by purveyors of weight-loss surgeries. What about serious disease? There’s certainly a correlation between being overweight and some diseases, but multiple studies suggest that the weight might actually be a symptom rather than a cause. Then there’s the idea that excess tissue “strains” the body. Eminent obesity researcher Dr. Paul Ernsberger has been quoted as saying, “The idea that fat strains the heart has no scientific basis. As far as I can tell, the idea comes from diet books, not scientific books . . . Unfortunately, some doctors read diet books.” What about dieting? Well, there actually is some scientific consensus there — diets don’t lead to lasting weight loss. Not even if you call them lifestyle changes. After an extensive metastudy of diet and weight loss studies, Dr Traci Mann concluded, “The benefits of dieting are simply too small and the potential harms of dieting are too large for it to be recommended as a safe and effective treatment for obesity.” So why, then, do doctors insist on prescribing diets and weight loss as a treatment for anything and everything? 
Sarah, 29 from Newcastle, Australia, had the misfortune of breaking both legs as a teenager, the result of a freak accident involving her legs falling asleep and then getting twisted to the point of breaking. Not long after learning how to walk again, she was involved in a serious car accident that left her with further damage to her legs. “I’m accident-prone,” she laughs. The multiple injuries have left Sarah with a build up of scar tissue that can make walking painful. But when she went to the doctor, her pain was blamed on her weight. “My weight is a factor in the healing process,” she says, “But it wasn’t the cause of my injuries — and I’ve got police reports, x-rays, and specialist reports to prove it.” Why do doctors insist on prescribing diets and weight loss as a treatment for anything and everything? Sarah changed doctors recently, and her new doctor decided to do a full medical history, checking the notes from all the physicians Sarah has seen. What she found shocked her. “She said there’s no record of my injuries with most of my previous doctors,” Sarah said. “They all had written that my leg pain was caused solely by my weight, and that meant I wasn’t getting any useful treatment for the pain. They just told me to diet.” Sarah’s new doctor promptly started her on a physical treatment plan designed for someone with compound injuries and severe internal scarring. The difference has been immediate. “Within two weeks I could walk nearly five kilometers. Before I started the treatment, I could only manage one kilometer or less before my knees were so swollen and painful that I couldn’t keep going,” said Sarah. “Getting actual treatment for my injuries, rather than just being told to lose weight and see what happens, has changed everything.” Just to be clear, I’m not saying that eating healthily and exercising aren’t good for you. The problem is when doctors prescribe diets and weight loss to patients without fully considering their symptoms and other treatment options. Stigmatization may also, problematically, stop fat people from seeking out medical care in the first place. “I just don’t go to the doctor,” says Anita, a 28-year-old advertising executive. The last time Anita saw a doctor, it was a routine visit to discuss vaccinations and anti-malarial medication for an upcoming overseas trip. The doctor prescribed the vaccines, and asked a nurse to administer the jabs. It was the nurse who decided Anita had diabetes — without having spoken to her, or seeing anything pertaining to her medical history. “He kept saying I would get a discount on the vaccines if I registered my diabetes,” Anita explained. “I haven’t got diabetes, but he wouldn’t listen. His whole attitude was like, ‘you know you’re fat, right?’ Um, yeah, I’ve noticed that, actually. Just give me the jabs.” The experience was pretty upsetting, and left Anita firmer in her resolve to avoid doctors wherever possible. Still, Anita, Sarah, and I are relatively lucky; our experiences have caused us pain and humiliation, but no permanent damage. This is not true for everyone. First Do No Harm is a website that chronicles the experiences of fat people with medical professionals — and it’s filled with harrowing stories. One woman lost a lot of weight suddenly and was praised for it — with doctors missing the fact that it was a sign of the cancer that shortly killed her. A man vomited constantly due to MS, but instead of viewing that as a medical red flag, doctors simply celebrated the 120-pound weight loss it caused. 
The vomiting led to permanent nerve damage, back pain, and tooth decay . A woman had an emergency-doctor declare that she didn’t need treatment for abdominal swelling after a serious car accident because she was just fat. She nearly died. A woman went years just being told to lose weight to address her ongoing, multiple health problems. It turns out she has a rare neurological disorder; the diagnosis delay has led to permanent brain damage. There’s another trove of awful stories on fat prejudice here. And of course Google’s got plenty more. A consistent narrative runs throughout these stories. Hormonal problems? Lose weight. Broken finger? Lose weight. Migraines? Lose weight. Losing weight is the consistent — sometimes only — treatment offered for every ailment imaginable. Losing weight is the consistent — sometimes only — treatment offered for every ailment imaginable. For many, changing the narrative around weight is literally a matter of life or death. So what can be done to address the problem? The good news is that there’s some recognition within the medical profession that this is a serious issue which must be addressed. It’s been noted that medical students don’t receive nearly enough training on obesity, and efforts are beginning to try to change that. Researchers are also working on empathy programs and raising awareness about the impact of implicit bias against patients. All of this is a promising start. At the same time, we can all become our own health advocates. If you’re a fat person, or someone you care about is a fat person, you can develop your critical thinking skills and challenge the classic “just lose weight” prescription if it doesn’t seem to fit the symptoms. This isn’t easy. There’s an implicit power imbalance between patient and doctor that makes challenging their statements very difficult. By working to become experts in our own health and our own situation, we stand a better chance of being able to call out something that doesn’t feel right. There’s an implicit power imbalance between patient and doctor that makes challenging their statements very difficult. Doctors are highly educated people, but they’re subject to the same biases as the rest of us, and many of them don’t stay up to date with the latest research. That’s not good enough. If obesity really is a major health concern, it’s essential that doctors stay educated on recent studies and metastudies that look at how to get the best outcomes for fat patients. If doctors really do care about their patients, they need to start looking at the overall picture of a person’s health, not simply the size of their body. Most of all, doctors need to stop prescribing a treatment that’s proven not to work for conditions that don’t warrant that treatment in the first place. The medical profession needs to step up. It needs to accept that diets aren’t the universal treatment option for fat people. It needs to accept that fatness isn’t the universal cause of ill health in fat people. It needs to engage with the very real damage caused by its attitudes toward fat people, and with the sub-standard care delivered to many people as a result of their size. It’s not exaggerating to say that lives depend on it.
https://medium.com/the-establishment/just-lose-weight-and-other-lies-my-doctors-told-me-16e71dddb836
['Martina Donkers']
2017-09-20 19:29:34.991000+00:00
['Health', 'Weight Loss', 'Brain Body', 'Doctors', 'Medicine']
What Does Justice for Breonna Taylor Look Like?
There is a common refrain uttered wide and far in the aftermath of every tragic death at the hands of police: Justice. Justice for George Floyd, Breonna Taylor, Rayshard Brooks, William Green. When news broke a few days ago that the Kentucky court would not indict Breonna Taylor’s murderers for her death, no doubt there was a lack of justice. But what kind of justice were we really searching for in a grand jury indictment, anyway? What does justice for police murders look like? Presumably, as Jalen Rose had demanded yesterday on ESPN, it is to see the four cops arrested and charged with murder of the first degree. As a measure of “justice,” the same had been argued for the cop who killed Rayshard Brooks, George Floyd, and William Green. In all of these cases, then, what justice ultimately means is punishment — lock the killer cops up, make them suffer for what they did. But is this justice, really? Is there justice in exchanging the loss of life with a prison sentence? Who does the affirmation of incarceration as a method of justice actually serve? Such an equivalence does not serve those who are most heavily exposed to state violence. Rather, to equate the injustice of the loss of life with the “justice” of incarceration only serves those who are tending to the cells. That is to say, it legitimizes the very logic of “justice = punishment” responsible for injustice in the first place. As hard as it is to accept, jailing police officers will not bring justice for the slain victims of state violence. It will only reproduce the problem. Concerning “what to do” with those cops responsible for these murders, I must confess I do not really know. Certainly, admittance back into society without any process of reconciliation or accountability is not acceptable. This would only serve to uphold the very unjust status quo, i.e. the wanton, state-sanctioned killing of Black people. But simply locking them up to punish them is, in my opinion, just as unacceptable. All I do know is that these calls coming from all corners of the country to jail the cops are not really calls for “justice.” They are merely the repetition of the same punitive and carceral logics which have for centuries now criminalized Blackness and poverty. “Justice” never takes the form of a jailhouse. On police and prison abolition Of course, I am not the only one who thinks this way. Many Black-led violence police and prison abolition groups operate on the basis of dismantling penal and carceral logics of justice (see The Marshall Project for a depository of helpful information on these topics). These groups view the problem of state violence and criminalization of Blackness at a far more systemic level, according to which addressing the problem entails truly radical reorganizations of our relations to one another. Rather than see jailing cops as a measure of justice for Black lives, police and prison abolitionists work to completely rework the problem of violence and conflict arbitration from its very foundation. For them, there is no equivalency between “justice” and “punishment;” indeed, the very notion that there can be is in itself unjust. Justice is not a question of punishment but transformation of the conditions of injustice. Often police and prison abolition is (ignorantly) ridiculed for being naïve and shortsighted, for e.g. having no clue what to do with “rapists and murderers” in the absence of prisons. There are two simple responses to this charge. 
For one, “rapes and murders” are still occurring even with mass incarceration and heavily funded police, so at the very least such a dismissal suffers from a severe lack of imagining better ways of preventing the injustices it claims to be concerned with. More often than not, charges of naïvety are simply a veil for complacency with a system which allows such abhorrent violences to continue. But, more importantly, police and prison abolitionists do take seriously the question of “what to do about violence.” Non-punitive and recuperative conflict arbitration is part and parcel of the very theory of abolition. No abolitionist is blind to the fact that violence will still occur in the absence of policing and prisons (although, some would say on a considerably reduced scale), but the point is to shift our perspective to the root causes of said violence. It is of course true that we can’t prevent every single accident of violence — all abolitionists are saying is that we can do a far better job of dismantling the institutions themselves responsible for begetting a great deal of violence. Thus, the only form of justice for Breonna Taylor worth its name is one which is truly transformative of the mechanisms of state violence which caused her death. Responsibility for her murder, that is to say, does not stop at the individual officers who killed her — it corresponds to an entire structure of power vested in the punitive and carceral logics which legitimize and reproduce anti-Black violence. There is no justice without abolition of that structure of power.
https://medium.com/discourse/should-justice-for-breonna-taylor-look-the-same-as-the-state-violence-which-killed-her-74b6487f99d5
['Aidan Hess']
2020-09-25 20:37:27.358000+00:00
['Society', 'Equality', 'Justice', 'Race', 'Racism']
MYOB launches integrated cloud payroll solution for larger businesses
Cloud accounting provider MYOB is investing heavily in the future, telling the crowd gathered at its Incite event in Sydney yesterday that it will spend $60 million on research and development in 2017, with artificial intelligence and the Internet of Things among its focus areas. Also part of MYOB’s future focus are bigger businesses, and with this the organisation has launched Advanced People, a new cloud payroll solution designed to help mid-market businesses manage their payments. The new solution enables users to manage various systems and processes on one platform, while also allowing employers to tailor the system to their needs. Users themselves are able to customise their own personalised dashboards and enter live data from outside the platform via OData functionality. By unifying data entry into one place, the platform is able to reduce the strain of using multiple databases and resources to curate data, allowing businesses to increase their efficiency. Speaking about the launch, Andrew Birch, MYOB’s General Manager of Industry Solutions, said the cloud-based system, combined with customisable features, allows Advanced People to tackle the “complex scenarios” experienced by larger-scale businesses. “We’re seeing a high demand in the marketplace for cloud-based solutions. Cloud is the future of Enterprise Resource Planning [ERP] and payroll,” said Birch. With MYOB traditionally seen as a solution for small business, the company has been making a concerted effort behind the scenes over the last few years to reach bigger businesses. Tim Reed, CEO of MYOB, said that while SMBs have been proud to use the company’s products, they eventually grew too big and existing solutions couldn’t keep up; “it was like a badge of honour to outgrow MYOB”, he said. Now, MYOB wants to get them back. Advanced People is built on the same platform as MYOB’s existing ERP solution, Advanced Business, with its ERP features integrated into one unified space. Focusing on automation, the merged platform holds MYOB’s first Data Calculation Engine (DCE), an algorithm which calculates employee tax and superannuation automatically. MYOB’s PaySuper, as well as a shelf of third-party tools, can be plugged into the platform to help simplify data entry and visualise it, among other functions. As a mid-market platform, Advanced People lets users choose which tier they want to pay for month by month, allowing businesses to accommodate fluctuations in their payroll requirements. The announcement comes after MYOB wrapped up its DevelopHER program at the end of last month. Looking to improve diversity in the workplace, the paid internship program, which teaches women to code, ended with the hiring of its three participants. The bid to improve diversity within the company came as MYOB realised it couldn’t develop relevant products if it was missing out on the perspective of half its user-base. This thought was brought forward yesterday, with Birch saying that housing MYOB’s development and product teams in Australia and New Zealand means the company has a solid understanding of local regulations and the needs of businesses. Looking ahead, the company will focus on developing a mobile-first approach and on increasing functionality for employees of businesses using MYOB solutions. Image: Andrew Birch. Source: Supplied.
https://medium.com/startup-daily/myob-launches-integrated-cloud-payroll-solution-for-larger-businesses-3c2ce9a833a7
['Mat Beeche']
2017-07-25 09:01:56.974000+00:00
['AI', 'Advanced', 'Incite', 'Advanced People', 'Accounting']
Guys Like Chris D’Elia Wouldn’t Thrive If We Didn’t Give Passes to Men Like Bill Clinton
Back in June, when I first wrote about the allegations against comedian Chris D’Elia, some people (ahem, men) thought I way was too hard on the Hollywood star. Several dudes privately reached out to me to say I was “overreacting” and that men need to be able to interact with women without fear of sexual harassment allegations. One reader blocked me because he felt I was being so unjust. Another guy commented that the girls were lying, just because D’Elia’s attorneys released emails with some of his accusers to suggest that they were willing participants in any flirtations. Oh, please. Chris D’Elia hasn’t been able to offer any real defense because he doesn’t seem to think he was ever really wrong. Yes, we all know there are underage girls who flirt with celebrities that flirt with them. Are we really expecting children to have more common sense than grown men with power and privilege? At any rate, the allegations against Chris D’Elia have since grown, and now they’re not about underage girls. More women have come forward to say that D’Elia exposed himself to them. Actor Megan Drust told CNN that she had marked D’Elia as “safe” when she agreed to give him a ride home after lunch with a mutual friend. “We are both sitting there and I’m like, ‘Where are we going?’ And Chris is leaning up against the door of the passenger side and looking at me in this really weird way and then he started to try to make flirty small talk. I was very confused because it just didn’t fit the moment. Then he took down his zipper and asked me to touch him and I said, ‘What are you doing? No.’ And because I wouldn’t touch him, he started to masturbate. I couldn’t believe it.” It gets worse, she says. “I get out and I have the door open and I walk out into the street and I’m saying, ‘Why are you doing this?’ And I remember saying, ‘You’re defiling my car.’ I didn’t want to make him mad or upset because you’re in survival mode, you know? He climaxed in his pants and then he zipped everything up and I said, ‘What’s wrong with you?’” Drust says she told a couple of friends about the experience at the time, and both friends corroborated her story with CNN. A few months later, she ran into D’Elia again at a restaurant in LA. “I’m standing there talking to my friends and I remember a guy’s voice with his lips right to my ear say, 'You.' I turned around and it was him. I just remember being very flustered and very uncomfortable.” Another friend of Drust’s confirmed that encounter with D’Elia to CNN. A second woman spoke to CNN and said that D’Elia exposed himself to her when she was the manager of Kimpton Schofield Hotel in Cleveland, Ohio. She got a call from D’Elia around midnight when he was a guest in March of 2018. He said his air-conditioner wasn’t working. She went up to his room by herself since there wasn’t an electrician available. “When I knocked on the door, he opened the door and he was completely naked. I was surprised, and I was annoyed that I just came all the way up just so he could expose himself to me.” The manager walked back to the front desk without saying a word. She says D’Elia called her back upstairs about the AC, but she refused to come up. She told two coworkers about the incident and reported it to the hotel’s management. She says management took zero action and acted like it wasn’t anything unusual. Today, the Kimpton Hotel Group denies they received a report. Both accounts jive with the attitude portrayed in Laura Vitarelli’s story. 
She told CNN that the comedian exposed himself to her and a friend at his hotel in 2015. The two friends met D’Elia after one of his comedy shows in New York. They took fan photos and he invited them to a party. From CNN: Vitarelli recalled that when they walked into D’Elia’s room, the lights were off and he was watching "Cops" and eating a bowl of shrimp scampi. She said D’Elia asked them to put their cell phones in a basket upon entering the room. "There was no sign of a party at all," Vitarelli said. "He said he was going to make us drinks and her and I were both a little nervous just because it really didn’t look like he was about to throw a party. There was nobody else there." Feeling a little nervous and "deceived," she said they accepted the drinks. Vitarelli noted D’Elia did not pour himself one. Vitarelli said D’Elia then sat between them on the couch and almost immediately, "put one of each of his hands down our backsides and started groping us." Vitarelli said she looked at her friend, who looked "frightened," and they made up an excuse to leave. "He got up with us and followed us to the door and said, 'Are you sure you want to leave?' And he pulled out his penis and it was fully erect," Vitarelli told CNN. "It was very uncomfortable for the both of us, and we knew we had to get out of there so we left as fast as we could." Her friend, who confirmed the incident to CNN, asked that her name not be included in this story for fear of backlash from D’Elia’s fan base. It’s not just women or survivors who’ve spoken up against D’Elia. According to fellow comedian Bill Dawes: “He was very proud of his body and he would expose himself in front of his guys he was on the road with and other male comics and he would do it kind of as a joke. He would expose himself in front of other women when other guys were in the room with him.” A female comedian also spoke to CNN but asked to remain anonymous for fear of professional repercussions. She claims she and D’Elia have been friends for more than 10 years. “We used to hang out at the 101 Café after sets and the way he would speak about women was degrading, disrespectful and demeaning to women.” Both she and Bill Dawes both claimed the initial allegations against D’Elia were not surprising. In fact, they were surprised his name hadn’t come up… earlier.
https://medium.com/honestly-yours/guys-like-chris-delia-wouldn-t-thrive-if-we-didn-t-give-passes-to-men-like-bill-clinton-1c7c7471ee07
['Shannon Ashley']
2020-09-11 18:27:46.160000+00:00
['Society', 'Culture', 'Politics', 'Women', 'Equality']
The Revolt of The Modern Indian Woman Against 1.3 Billion
The Revolt of The Modern Indian Woman Against 1.3 Billion The freedom of choice — A woman’s place is wherever the hell she wants to be Photo by Barathan Amuthan from Pexels Yesterday, I came across a Facebook post about an illustrator and her iconic art series on realistic depictions of women’s lives. The featured picture is a comparison between two perspectives. On the left is a happy woman in a dress, with her husband and baby. On the right is an equally happy, casually dressed woman holding a glass of wine and a slice of pizza. The message of this artwork, as the artist had correctly titled it, is that both these women are established and “complete”. Whether it is marital bliss or professional success, or both, the woman gets to choose what her definition of completeness is. I almost scrolled past thinking “Well, this isn't exactly mindblowing. Isn’t this just common sense?” This may be because I have seen these kinds of posts before and I completely agree with them. To me, it really is that simple. Out of curiosity, I clicked on the comment section and found out exactly why it is mindblowing for millions, nay, billions of people. Several outraged “meninists” were, and probably still are, hurling abuses at the fake ‘feminazis’ of the world. The vile comments showed me that such reasonable notions of freedom and choice are abnormal in the eyes of insecure, angry, pathetic people. Western countries may not have fully won the marathon of equality yet, but India is still on its first few laps. Hell, in some places here, the race hasn’t even begun. So what is it about a bold, badass Indian woman that intimidates the people of this country? She won’t take bullshit Sit properly. Don’t play in the sun. Use this face cream to get fairer skin. Who will marry you if you remain this dark? Dress decently. Are you trying to get harassed? Oh sweetie, he’s just picking on you because he likes you. Don’t be so bossy — it’s very unattractive. Listen, all this career stuff is only until you get married. Then, you need to settle down and run a family. What kind of a woman doesn't give up her career for her husband and children? Sigh. She has heard it all. And she’s had enough. She’s had enough of the patriarchy breathing down her neck, dictating her every move. She rejects an age-old system that is upheld by the vast majority of this great nation’s 1.3 billion people. Girls shouldn’t swear. Girls shouldn’t stand up to the men in their families who are wrong. A girl should be seen, not heard. Well, this girl is going to be seen, heard, and freaking brilliant. She is going to be successful at what she does. She is going to reach heights that are far beyond what your narrow mind can imagine for her. She did not inherit the silence of her mother Rewind to 25 years ago. Her mother, a blushing bride-to-be, is asked if she knows how to cook and clean. She is asked to manage a household all day, every day. And she does so dutifully. Her mother is quite selfless. She does all of the chores, right from taking care of her children 24/7 and cooking varieties of meals to washing clothes and vessels. And she is merely called a “housewife”. Because tons of unpaid work is just a woman’s duty, right? Of course, in many cases, there is a good partnership going between the husband and wife. The husband brings home the cash, and the wife takes care of everything else. He puts food on the table figuratively, while his wife does so literally. So what’s the problem?
Our mothers are taught to stay silent and “adjust” to their husbands’ ways with a smile. They are told to tolerate those sexist jokes, power imbalances, and bear the brunt of emotional labor (and physical, not to forget being pressured to have kids even when they aren’t ready). The “spoilt”, modern woman is not like that though. She watched her mother go through things that she won’t ignore. She demands equality She knows what she deserves, and she won’t settle for less. She has the “audacity” to demand that society treat her the same as her male counterparts. No, she doesn’t think that her husband is a demi-god for doing 20 percent of the household work. She doesn’t approve of her family praising their precious son-in-law to the moon and back, just for doing the bare minimum. And this frightens her family. Her mother looks on, perplexed, wondering if people will say she has been raised wrong. Although she is proud that her daughter is a strong woman, she is scared for her future. How will a country that has thrived on its patriarchy accept this bold woman? She will be the outrage of her in-laws’ home. I mean, what’s next? Is she also going to demand that her husband support her emotionally, and not play video games while she juggles cooking and feeding the baby? Nonsense. Now, that's just about the happily married Indian women. What about the single ones? Do we dare talk about them? She is her first priority To begin with, when are we going to stop justifying oppression with #NotAllMen? Maybe not all men, but too damn many. There are so many glorified incels who outright hate women. They wear their misogyny as a badge, blatantly and quite proudly. These men do not care enough about all the rape, harassment, domestic violence, abuse, and injustice. But oh, how they’re seething at the fact that Google didn't make a fuss about International Men’s Day! There is a rape happening every 15 minutes, but no, please do tell me more about how people celebrating Women’s Day is unfair. It’s not like women were suppressed and violated for centuries. What really bothers them is that the modern Indian woman doesn’t give a flip about them. She dresses for herself. All those tight jeans and short skirts? She is pretty pleased that she got them on sale after hours of shopping, and she will flaunt them. She doesn’t care if you think it looks tacky. She does her hair and makeup because she loves doing it. To put it simply, she lives for herself. Another person’s opinion doesn’t stop her from donning her robes like a queen. I’d like to clarify that there is nothing wrong with dressing to impress people. That’s perfectly fine. But, you can’t take away her choice to not give a damn. And so many people badly want to. If her idea of fabulous clothing is a fully covered salwar suit, that’s fine. If it’s a mini-skirt and a tank top, that should be fine too. She has her own dreams and aspirations She is out on a journey to make a life for herself — a life that she has designed and is working hard for. And she won’t let anyone’s small-mindedness stand in her way. She will not be told that her place is in the kitchen or the bedroom, or anywhere that someone else decides for her. Like we discussed earlier, she doesn’t care. Marriage isn't on her mind right now. Frantic cries about how her biological clock is ticking are not her biggest concern right now. And this freedom to pursue something unconventional scares the bejeezus out of people.
The very thought of her rejecting their fragile, heavily biased “traditions” and “values” makes their minds explode. They can’t stand a woman having her own voice, one that is not in their control.
https://medium.com/an-injustice/the-revolt-of-the-modern-indian-woman-against-1-3-billion-c913a16c6b42
['Bertilla Niveda']
2020-12-21 15:37:44.387000+00:00
['Feminism', 'Women', 'Equality', 'Culture', 'Society']
Christmas with the Drummer Boy
Come they told me Pa rum pum pum pum A new born king to see Pa rum pum pum pum Cory raised his chin, opened his mouth wide, and sang the purest note he knew how. Ms. Wilson looked up from the piano and tossed one of her sparkling smiles at him. He felt great about that, but surprised she heard him over all the other kids. Eighth grade choir was hard! It was still his favorite class, but it wasn’t all fun. He had to learn this thing called ‘harmony’ and how to sing in ‘parts.’ And how to read music! Today was easier. They were goofing off singing Christmas carols. Thanksgiving was over and Christmas was coming! It seemed so far away, but Ms. Wilson said it was close. “Class! Settle down now,” she said as she stood up from the piano. “The bell’s about to ring, but please remember this is Thursday and the GSA is using the room next. No private practice in here after, sorry.” Cory’s face turned hot as some boys coughed out something that sounded like “faggot.” Cory knew GSA meant “Gay Straight Alliance.” He even knew it was a club for kids like him. But he also knew he’d never dare stay after school for a meeting like that. Not even in his dreams! “Cory!” Ms. Wilson called, kind of loud and stern. “Becka? Audry? Can I see you guys up here for a minute, please?” Cory gasped. The end-of-school bell rang just then, so probably nobody heard, but he covered his mouth, anyway. How could she know about him? Nobody knew, not even his dad! What happens next? Finish the story on Prism & Pen.
https://medium.com/james-finn/christmas-with-the-drummer-boy-7e25b845fbe0
['James Finn']
2020-12-23 17:47:34.439000+00:00
['LGBTQ', 'Christmas', 'Love', 'Family', 'Music']
How Characters From ‘Friends’ and ‘The Office’ Would Handle the Pandemic
How Characters From ‘Friends’ and ‘The Office’ Would Handle the Pandemic What would Michael Scott be doing right now? Photo by Hannah Busing on Unsplash As a bunch of TV shows return for new seasons, many writers are leaning into the pandemic plotline that we all find ourselves living in. Honestly, I like it. I find it oddly comforting hearing Olivia Benson on Law and Order: SVU telling everyone to mask up. It’s also made me wish that I could have new episodes of shows that aren’t airing anymore, so I could see those characters handling this new reality. Since I can’t watch TV now without yelling at the screen, “SIX FEET WHAT ARE YOU DOING,” I thought I’d imagine what my favorite characters might be doing during this pandemic … to make myself more comfortable when I go to watch them. It’s sort of helping. Enjoy my musings on some of these classic characters, and please, wear a mask and social distance! Let’s keep each other safe. Friends Ross Geller Is convinced that he had Covid before anyone else did and is actually the real “patient zero.” Keeps saying, “I couldn’t smell or taste anything, I swear!” Monica Geller Can’t believe she’s living through a year where being clean and staying safe is the most important thing. It’s her time to shine. Sets up hand-sanitizing stations in every room in her apartment and buys anything she can in bulk. Sprays everyone down before they come anywhere near her. Throws the most elaborate socially distant park picnic anyone has ever seen. Rachel Green Only wears masks that perfectly match her outfit. Figures if she survived all that college partying then she can definitely survive this pandemic. Chandler Bing Takes the pandemic very seriously, but has never made so many jokes in his life. Joey Tribbiani Brings a tape measure with him everywhere so he knows exactly how much distance is six feet, but still goes on tons of dates. Claims it doesn’t count somehow. Phoebe Buffay Knits all of her friends masks that either don’t fit or have holes in them. The Office Michael Scott Tried to petition that his branch be the only branch allowed to keep working from the office. Kept saying, “You can’t break up our family!” It didn’t work. Now calls way more meetings than he used to just so he can get everyone on-screen together, and changes his Zoom background every single time. He’s also started showing up outside everyone’s houses with a megaphone for what he calls “social distance comedy hours.” Dwight Schrute Is convinced he’s immune to the virus because of his superior Schrute DNA but is also positive that the pandemic marks the beginning of the end, and spends most of his time in his emergency bunker. Before the office shut down, he wore his hazmat suit to work every day, just to prove a point. Jim and Pam Halpert Every Friday night, they do a different cute quarantine date to keep things exciting. Jim has also taken to hiding things around the house for Pam to find as something to do instead of working. Since he can’t prank Dwight in the office, he frequently drives to his house and sets up different pranks around his farm. Social distance style, of course. Angela Martin Never attends a Zoom meeting without at least three of her cats. Ryan Howard Claims he’s found an unknown company that’s figured out the cure for Covid and keeps asking for donations from everyone to support them. Meredith Palmer Keeps asking random men if they want to quarantine with her. Definitely hasn’t grasped what quarantine really means. 
Oscar Martinez Loves to critique the way people use Zoom and tries to give everyone “work from home tips” constantly. Stanley Hudson Is thrilled to finally have an excuse to stay six feet away from everyone at all times. Kevin Malone Left a sandwich in the office fridge before it closed. He’s hoping it’ll still be good to eat once it opens up again. Kelly Kapoor Buys a ton of masks that match all of her outfits. Gets upset when Ryan doesn’t notice. Andy Bernard Learned all about pandemics at Cornell University so he feels like he’s pretty prepared for this. His biggest annoyance is that his family vacations keep getting canceled. “You can’t catch Covid on a boat,” he claims. Erin Hannon Still isn’t quite sure what Zoom is or how it works. Phyllis Vance She and Bob Vance (of Vance Refrigeration) left town at the beginning of the pandemic and no one’s seen her since. Darryl Philbin Has ignored every call from Michael since the pandemic started. Claims covid has infected his phone and laptop. Michael believes him. Toby Flenderson Is never on any of the group Zoom calls because Michael refuses to let him in.
https://medium.com/the-innovation/how-characters-from-friends-and-the-office-would-handle-the-pandemic-29ffd32c6b41
['Caitlin Jill Anders']
2020-12-28 17:33:03.358000+00:00
['Humor', 'Movies', 'Creativity', 'Culture', 'TV Series']
For heaven’s sakes, creative people, go get your own domain name.
A client I worked with had thousands — thousands — of followers on Facebook. Woke up one day and the page was gone. Poof. Facebook deleted it. Violation of terms, they said. Which term? Who knows. Facebook never answered. They begged and pleaded to have their page restored, but sorry was all they got. A woman who runs a coaching business got booted off Twitter. They told her it was because of some of her (nursing) photos that showed inappropriate content. People complained, they said. Was that really why? No idea. Audience, gone. Overnight. That’s what mattered, at least to her small business. She had to start over, which kind of sucked. It’s not just Facebook A lot of writers, creatives and small businesses don’t bother to get and use a domain name of their own. Why? It’s so easy to use Wix, Weebly, Facebook, Twitter, Etsy, Instagram, etc. So they’re promoting a url like: — authorsfullname.wixsite.com — myverylongbusinessname.weebly.com — facebook.com/somelongnameprobablywithnumbers — twitter.com/HeyCloseEnoughAndAvailable — etsy.com/shop/SomeLongShopName — instagram.com/nameorbusinessnamehere For heaven’s sakes, creative people, go get your own domain name. A domain name costs less than $10. Use a namecheap coupon and get one for $8. Less when they have a sale. (And no, that’s not an affiliate link.) Folks, domain names can be forwarded. Free. I get it. You need a platform that’s easy to use. I’m not telling you to learn to be a coding genius and get a self hosted package and figure out how to build a site from scratch. Use what works for you. But own your own name, for goodness sakes. Register BobTheWriter.com and forward it wherever you want. To your (very long) weebly url or etsy or wix or instagram or your facebook page if that’s how you roll. At least if there’s some catastrophe, you can just change the forward. Perception matters… A lot of creatives work hard to build a reputation and a following. But your name is part of your reputation, and SuzannaSmithWriter.weebly.com does not have the credibility that SuzannaSmithWriter.com has, if you hear what I’m saying. But hey, your call. You could be like the girl who landed some really sweet media coverage and a full page feature in a major magazine — and sent all that exposure to a long url that wasn’t working when I tried to go see more. /rant
https://medium.com/linda-caroll/for-heavens-sakes-creative-people-go-get-your-own-domain-name-e046c18ca83c
['Linda Caroll']
2020-09-01 18:51:19.971000+00:00
['Creative', 'Advice', 'Writing Tips', 'Creativity', 'Business Strategy']
5 Reasons Lady Gaga’s “Stupid Love” Feels So Refreshingly Retro
I have strong opinions about pop music. I was born in ’88 and grew up in the 90s, so my basis of modern pop tunes is everything from Mariah and Michael and Sheryl to Britney and Ricky and J.Lo. Jock Jams. Now That’s What I Call Music! I remember hip-hop sliding itself into the Top 40 in a huge way during the turn of the millennium, and alternative rock struggling to stay mainstream if it was anything less than catchy. As of 2020, popular music has hedged in more directions than I can count. Between the 90s bubblegum pop surge and now, we’ve seen the charts topped by almost every genre there is: R&B, EDM, country, trap, soft rock, latin dance, folk, synth-pop, soul… Think about how utterly lucky we are to exist in a period of creation where this much variety in music is available to us. In case you need a refresher, Lady Gaga emerged in 2008. First, as a chic underground pop star in small venues, and in a behind-the-scenes video series on YouTube called Gagavision. (Never knew this existed? Do yourself a favor and watch the first one. Diehard fans, you’re welcome. Enjoy this gem of a throwback.) The following year, she dropped “Just Dance.” (Side Note: 2009 was the best damn year for modern pop music on record. Here’s the evidence. Come fight me about it.) Then “Poker Face,” closely followed by the artsiest tour you ever did see. Dec. 2009: The Monster Ball at the Fox Theatre Atlanta. Taken with a Blackberry, y’all. Soon, she started talking about disco sticks and showing up in outrageous outfits — and making big, bold statements like her now legendary 2009 VMAs performance. “If I’m gonna be sexy on the VMAs and sing about the paparazzi, I’m gonna do it while I’m bleeding to death and reminding you of what fame did to Marilyn Monroe. And what it did to Anna Nicole Smith. And what it did to…yeah. Do you know who?” (Answer: Amy Winehouse.) That performance was when I realized that this woman did not come to play. She was bringing a punk spirit to the pop world that I hadn’t experienced before. Not Hot Topic-brand, Avril Lavigne punk. Not sad alternative aggression that wishes it were punk. But using her influence to say important things punk. Grossing out her critics just for shits and giggles punk. Wearing a meat dress punk. (Icky, but I admire her gall.) I don’t know what to say about the state of pop music now… I don’t hate it, but most days, I’m not excited to stream a Top of the Charts playlist. Enter Gaga, broadcasting live from Chromatica: My immediate thought was an emphatic, “FINALLY. This is what pop music needs right now.” And by “this,” I mean vibrancy and positivity and fun and a return to the sound that made it so infectious in the first place. (The only mainstream artist who’s doing that at the moment is Lizzo, who’s wonderful.) That’s when I realized that this song and this video have a distinct retro feeling to them. They don’t exactly feel like they’ve come out of 2020 — they feel more like bits and pieces of my childhood. And yet, the sound is crisp and loud and joyfully reckless and as danceable as any hit beyond 2000 has been. It’s taken me several watches and listens, but I finally landed on a handful of things — that cannot possibly be unintentional — that make for delicious throwback vibes, rendering “Stupid Love” an automatic hit. one | It feels pretty, uh… mighty and morphin’. This is something viewers picked up on immediately. The concept of the video revolves around the planet of Chromatica, where several tribes are battling for dominance.
Each tribe dons a different color with a matching tribal symbol. There have been other comparisons to things I don’t have experience with — like the game Bayonetta and the iconic Star Trek — but if you ever watched Power Rangers in the 90s as a kid, these colors will undoubtedly jog your memory: Missed these tribal symbols flashing onscreen during the video? Me too, until like viewing number 9. two | Graphics poppin’ up left and right… It’s true that the tribal seals pop up and rotate onscreen for a few seconds during the video, but you may have missed the other graphics that made an appearance. Remember the early days of music video blocks on MTV, VH1, and The Box? Remember how you knew what you were watching and what was coming up next? Check the bottom-right corner: On the left: the neon pink “Stupid Love” logo. On the right: the chrome “Chromatica” logo. three | A disco beat + 80s synths always win. Okay, this one isn’t unique to modern pop… but it’s on this list because it’s further proof that disco is one of our favorite pieces of musical nostalgia. Any song that opts for a disco vibe automatically reminds us of energy and movement and fun — and familiar standby songs by the likes of Donna Summer, Whitney Houston, and Nile Rodgers. Disco-inspired pop tends to find its way onto party playlists, ambient shopping tracks, slick commercials, and workout playlists. Even now, in its mutated, evolved forms, we love that shit. See also: four | Rave fashion. Club culture existed in the 60s, of course, but it was killed gently by the much looser hippie movement. It wasn’t until the 80s and 90s that “rave” came to mean something way more distinctive in American culture in particular: Neon. Excess. Cartoonish. Glowsticks. DIY outfits. “Cyber”-everything. One look at the costumes of Chromatica is all you need to know about how Gaga feels about rave culture. (If you were around for the ARTPOP era, though, you already know what’s up.) five | The big, showy group dance. It’s not unheard of in newer music videos — and it’s always been a staple of tour choreography for most pop performers — but music videos less and less often feature a good old-fashioned, “Let’s all join in a massive group dance for no apparent reason other than it looks SO FUN.” This is a video element that harkens back to the greats: Michael, Janet, Madonna, Britney… A simple(-ish) repetitive 8-count of choreography is something we all know and love, and it’s something Gaga and her choreographer have smartly incorporated into her biggest hits. Even if you’ve forgotten the words to “Bad Romance” and “Poker Face,” you probably remember the monster claws and face-swiping moves attached to those tunes.
https://nikkiddavis.medium.com/5-reasons-lady-gagas-stupid-love-feels-so-refreshingly-retro-9bb3c89bb92e
['Nikki Davis']
2020-03-02 16:27:01.398000+00:00
['Music Video', 'Pop', 'Lady Gaga', 'Pop Culture', 'Music']
Deer Season
The sun rose and began its languid trip around the sky. Its light stretched among the woods, finding hidden spaces, covering others in new shadows. Dawn’s warm yellow light didn’t capture the way the frigid air stopped your breath, or how on really early mornings if you didn’t keep the air in your mouth for a little while it would sting your lungs. On November mornings, the world was still quiet. Their footsteps crunched in the old snow. He stepped on one large fragment of ice, hidden under the snow, and one sharp crack emanated from under his small boot. This brief interruption of the unearthly silence made them stop for a full minute. His grandfather turned, keeping the rifle slung over his shoulder, and brought one finger to his lips. He mouthed sorry. The shadows shrank as the sun rose higher in the sky. Unable to keep silent, the boy rode on the older man’s back while he carried the rifle. He surveyed the forest and tugged on the strings connected to the old man’s hunting cap. The woolen ties bounced with each of the man’s steps. The boy smiled a secret smile. This is what he wanted to do. He remembered his grandfather touching him gently on the shoulder last night after grandma had put him to sleep. Grandpa asked if he wanted to go. He nodded. He never wanted anything so bad. “You’ll have to miss school,” his grandfather said. “I don’t care,” the boy said. “I didn’t think you would.” They had to keep it a secret. Grandma would never have let him come. He was too little, she would say. Besides, he really should be in school. Grandpa told him he would take him to school today since it was too cold to be waiting for the bus. They gathered in the old blue Ford. Its time on the road and life in the woods painted the truck with a thick layer of mud, so deep it was hard to tell the original color. They smiled and waved to Grandma and drove to the corner. This was the moment of truth: if his grandpa turned right, then they were going to town and he would spend another day staring out the window at school. But, the old truck swung left and they made their way deeper into the woods. The old man pulled over near a trail and they got out. Because of their ruse, the boy was still dressed in school clothes, but his grandpa had snuck out in the middle of the night to put his hunting clothes outside. The boy changed in a hurried manner, trying to keep the cold out. They trudged through the snow, the white expanse covering the forest floor and leaving a road map of the visitors from last night. There were squirrel tracks, dotting the snow between the trees in hopes of finding their winter stores of food. Rabbits had traced a maze through the forest. They paused when his grandfather found a deer track. The boy was still on his back and he peered over grandpa’s shoulder as the old man pointed out the dent in the snow. “It’s just a little fella,” the old man said. And they continued. Finally, they found the blind. It was an old wooden hut placed ten feet high in the trees. The boy went up first, his little boots causing the ancient wood to squeak. The inside was simple, just a wooden bench and one open side, the point from which they would hunt. The old man followed up the ladder. He set the rifle down in the corner along with the bag he was carrying. He watched the little boy stare outside, his eyes taking in the wonder around him. He looked out over the expanse of forest before him. The leaves had fallen and now all that remained were the trees’ skeletons draped in the late autumn snow.
It was then that he knew he had done the right thing keeping the boy out here. School could wait, he thought, there were only so many years for his little mind to see these things. Before they were ruined. He reached down and adjusted the boy’s hunting cap. It was a hair too big for the boy, but it was once his father’s and after hearing that, the boy refused to wear anything else. They sat, letting the still morning air bathe them. It was one of the best times of the year, both of them thought at the same time. It was cold though, even with the rising sun, it felt as though the temperature was still dropping. The old man could feel the frigid air seeping into his bones, and it made it hard to move. He looked at the little boy; he could tell he was cold, but the boy refused to admit it. He had moved away from the opening and was sitting on the pine boards. Even with their heavy clothing, and stout boots, and wool hats, it was cold. “Come here,” the old man said and the little boy did as he was told. His grandpa reached into the bag and pulled out a thermos that held his grandma’s homemade hot chocolate. Every year she made it for his grandpa. They took turns sipping from the cup and letting the liquid warm them from the inside. The old man smiled and so did the little boy. The old man reached into the bag once more and pulled out the brown bag. Inside were the sandwiches his wife made for him. He pulled one out and took off the wax paper, delighted to see the peanut butter and jelly slathered on white bread. He pulled out another package and handed it to the boy. He peeled the paper away to find a mangled plain peanut butter sandwich with the crust hacked off. The old man smiled as he thought of his handiwork, trying to put together the sandwich in the middle of the night with the lights turned off, so as not to wake his wife. He didn’t want to alert her to his plans. The boy thanked his grandpa; he hated jelly and quietly nibbled on his favorite sandwich, glad his grandpa made the special trip to the kitchen in the middle of the night. They sat for another hour in silence, trying not to let the forest know they were there. Chickadees chirped in the tree next to them, and they could hear the intermittent pats of squirrels running through the trees above them. Before long though, the old man knew the quiet beauty of nature would only hold the boy’s attention for so long. Again, he reached into his bag and from it pulled a deck of cards. The boy smiled, delighted to have a small pause from the tedious task of searching the forest for moving shadows and rustling limbs. At this point, the only card game the boy knew was War, so the old man delighted the young boy with hours of the card game until the repetition bored both of them. The old man was putting the cards in his pocket when he saw it. Perhaps thirty yards from the blind, something walked through the forest. The old man held a finger to his lips and pointed, and watched the boy’s eyes widen as he watched the beast lumber through the tangle of trees. Soon it would cross into the sight lines they had cut a few weeks ago. The deer stepped out from the shadows nibbling on a tree branch. It was large; the old man saw at least eight points. It moved with such grace for a large animal. It no longer had the bright brown fur of a summer buck but had changed to the more muted brown, almost gray tone to help it blend in with the naked winter forest.
The old man and the boy watched for ten minutes; neither of them thought of the rifle in the corner of the blind, content to watch the beast roam the forest. Finally, a stick broke and its sound resonated through the woods. The deer heard it, and it immediately stood still before it bounded into the safety of the deeper woods. The old man sat with his arm around the boy. “That was really cool,” the boy said. “Yes, yes it was,” the old man said. “I’m glad you didn’t shoot it,” the boy said. “To be honest, that rifle doesn’t even have bullets in it,” the old man said. They had to return earlier than either one of them wanted to, but in order for their charade to work, they needed to be home at the boy’s regular time. He quickly changed in the truck. “Thanks, Grandpa,” the boy said. “You’re welcome,” the old man said. Just before he slid out of the truck, the boy asked, “Grandpa?” “Yes?” he said. “Did you used to take Dad when he was little like me?” the boy asked. “Yes,” he said. “Did you ever shoot a deer with him?” “No.” For years the boy was confused. There was a joke around town about how his grandpa could never get a deer, but at home, his grandma told him he was the best hunter she knew. He thought he knew what she meant now. “Grandpa,” the boy said. “Yes,” the old man said. “I miss Dad,” the boy said. “I miss him too,” the old man said. The triumphant hunters walked into the house to find grandma was busy cooking in the kitchen. “What are you doing home so early?” the old man asked. “Well,” she said, “I figured you two might be hungry after the day you had.”
https://medium.com/the-inkwell/deer-season-ded527011f76
['Matthew Donnellon']
2019-11-02 00:27:40.021000+00:00
['Short Story', 'Fiction', 'Outdoors', 'Family', 'Creativity']
Extracting Audio from Video using Python
Getting Started This will be a short and simple project. It will not take us more than five minutes to finish the whole project. Before moving to the programming part, I would like to give some information about the library we will use in this project: MoviePy. Let’s get started! As you can understand from the title, we will need a video recording for this project. It can even be a short recording of yourself speaking to the camera. Using a library called MoviePy, we will extract the audio from the video recording. If you are ready, let’s start by installing the libraries! Installing a module library is very straightforward in Python. You can even install a couple of libraries in one line of code. Write the following line in your terminal window: pip install ffmpeg moviepy Ffmpeg is a leading multimedia framework that can decode, encode, transcode, mux, demux, stream, filter, and play pretty much anything that humans and machines have created. (Reference: http://ffmpeg.org/about.html) MoviePy is a library that can read and write all the most common audio and video formats, including GIFs. In case you are having issues when installing the moviepy library, make sure the ffmpeg library is installed correctly. Now, let’s get to programming. You can use a text editor or Jupyter Notebook. We will start by importing the libraries.
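The article's own code is not reproduced here, but a minimal sketch of the kind of script it is building toward looks like the following; the file names "video.mp4" and "audio.mp3" are placeholders, not from the original post.

```python
# Minimal audio-extraction sketch with MoviePy (file names are placeholders).
from moviepy.editor import VideoFileClip

# Load the video recording from disk
clip = VideoFileClip("video.mp4")

# The .audio attribute holds the soundtrack; write it out as an mp3 file
clip.audio.write_audiofile("audio.mp3")

# Release the underlying reader processes and file handles
clip.close()
```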
https://towardsdatascience.com/extracting-audio-from-video-using-python-58856a940fd
['Behic Guven']
2020-12-28 13:09:44.954000+00:00
['Coding', 'Programming', 'Artificial Intelligence', 'Technology', 'Machine Learning']
My Preparation Journey for Google Interviews
A. Coding Software Engineers often find themselves in challenging situations e.g. dealing with ambiguity, unclear requirements, breaking down complex problems, handling edge cases, finalizing an approach considering trade-offs, etc. Coding interviews are one way to get a glimpse of these skills. To keep it simple — Coding rounds are focused on Problem Solving with Data Structures & Algorithms. These questions tend to be tricky and provide valuable insight into the candidate’s analytical ability. Become sharp at solving Data Structures & Algorithm problems. This skill is gained over time. There’s no shortcut; the only real formula is consistency. Practice, Practice, and Practice until you develop a natural problem-solving skill. Preparation Strategy 1. Estimate Preparation Time This is often ignored and not considered necessary. I suggest calibrating your current hold on problem-solving in DSA. I looked into my stronger and weaker areas and did a rough estimate of preparation time. This estimation helped me prepare my mind for the long-term (or short-term) goal and kept the motivation up. “Give yourself sufficient preparation time. It’s always better to be over-prepared than under-prepared.” Total duration can vary depending on your expertise. At a broad level, I have categorized it into the buckets below. Beginner — Can comfortably code in at least one programming language. Lacks basic knowledge of DS and Algos. Struggles (or takes considerable time) to solve Easy difficulty problems. Intermediate — Good knowledge of DS and Algos. No problem with Easy difficulty. Can solve most Medium problems. Struggles with Hard. Advanced — No problem with Medium difficulty. Able to solve most Hard problems. I put myself at Intermediate level before interviews. Preparation Time Estimate 2. Coding and Learning Platforms LeetCode, InterviewBit, and GFG were my leading go-to platforms for coding practice. Before the interviews, I solved around 320 LeetCode, 80 InterviewBit, and 30 GFG questions. Problem Difficulty Distribution Medium difficulty problems are essential as most of your interview questions will fall under this category. Solving these will immensely improve your speed and problem-solving ability. It’s important to start with a mix of Easy and Medium level questions at an early stage. Begin with Hard problems once you gain an adequate level of confidence. Don’t get demotivated if you are unable to solve the Hard ones. These problems can take a longer time to practice and perfect. Whenever I felt overwhelmed, I came back to an Easy for a motivation boost. LeetCode: Without a doubt, one of the best platforms out there. The best thing about LeetCode is its community. Discussion forums are super useful and provide multiple approaches. Don’t think twice about opting for LeetCode Premium; it’s totally worth every penny. InterviewBit: I highly recommend following the Programming Track. Experience on this platform is closest to a real interview.
Sometimes your code passes all test cases but might not be time/space optimal (as required during an actual interview). InterviewBit reports these submissions as suboptimal, providing you additional feedback. GFG: I utilized this platform majorly for problem discovery and DSA fundamentals. Topic explanations and language-tailored implementations are really good. GFG also has a decent collection of company and topic-specific problems. I did not wholly rely on a single resource for learning. Each resource provided me newer insights. I kept a journal, always aggregating and expanding my knowledge along the way. Algorithms Specialization: This Coursera track rocks! It has a total of 4 courses, covering all basic and a few advanced DSA topics. Great for beginners. Youtube: Video explanations are my cup of tea! Naming a few channels with helpful content — Rachit Jain, Abdul Bari, Tushar Roy (awesome walkthroughs!) and BackToBack SWE. CTCI & EPI: I used these as supplements to my preparation. These books provided me with a quick brush up on Theory & Problems. It can sometimes feel time-consuming to target individual topics here. Instead, I opted to cover the sections breadth-wise before interviews. CLRS: Sometimes used to look up pseudocode. Handy resource to gain in-depth knowledge on time complexities and theory. Last but not least, BaseCS articles by Vaidehi Joshi. She writes intuitive & straightforward explanations on several DSA topics. 3. Using a Timer As the interview durations are getting shorter, it’s crucial to work on your problem-solving pace. Generally, a coding interview is 45–50 mins long and a candidate is expected to solve — 2 x Medium OR 1 x Hard OR 1 x Easy plus 1 x Hard Follow-up. Even if you can solve the initial question, taking too long means you won’t have sufficient time to tackle the second one. I used a timer to time myself during problem-solving sessions. Medium Problem: 20 mins Hard Problem: 40–45 mins Beginners can choose to ignore this factor since getting towards a correct solution is obviously more significant. 4. Mock Interviews I gave a bunch of mock interviews before the actual ones. These can prove quite beneficial. Failing early in a test environment gives you useful insight. It will help you to detect the gaps in your thought process. Try to rectify each mistake and get better with every mock interview. “If you crack consecutive mock interviews, consider it as a positive sign.” Paid Mock Interviews come with additional benefits. In particular, post-interview feedback gives lots of details on the interviewer’s expectations. 5. Prepare a roadmap Until now, we spoke a lot about different elements that can go into preparation. Now let us try to connect these pieces together and create a roadmap! Consider the snapshot below of my calendar, a month before the interview. Preparation Schedule I split the entire preparation into a set of tasks/milestones. I assigned daily goals weeks (or even months) ahead of the interview.
This method helped me to avoid randomness and prevent getting lost along the way. On weekdays I could only allocate 2–3 hours as I was engaged in office work. I scheduled problem-solving sessions in this time-slot. Theory topics were reserved for weekends when it was possible to dedicate a generous amount of time. Closer to the interview, I scheduled Mocks. During the final weeks, I reduced coding sessions and focused more on reading CTCI and EPI. I know a lot of us have family commitments and full-time jobs. A forecasted schedule might not always go as planned. But the whole idea here is to generate a habit. Keep track of your progress and pending items. Continue tweaking until you find a schedule that works best for you.
https://medium.com/swlh/my-preparation-journey-for-google-interviews-f41e2dc3cdf9
['Shantanu Kshire']
2020-12-28 10:19:23.216000+00:00
['Software Engineering', 'Interview Preparation', 'Data Structures', 'Google Interview', 'Algorithms']
The Future Abortionists of America
Many students come to the conference in need of practical instruction. Depending on their university and their residency, without MSFC, medical students might find themselves stuck learning how abortions are performed from YouTube. On the conference’s second day, it offered two-and-a-half hours of intensive instruction broken up into first and second trimester sessions, for the attendees who needed it. Chiavarini, with her hyper energy and theater background, presided over the first overview. “Trigger warning!” she announced too late as the image slid in. “Whoops! Anyway, those are fetal parts.” Humor is one of Chiavarini’s ways to shock the students a little, to get them thinking less like civilians. She tells jokes that would make most people blanch. Because knowledge about the uterine reproductive system is taboo even within medical schools, it was hard for Chiavarini to know where to start. She quickly glossed the curiosities of what the “first trimester” even means. For laymen, the math seems easy: nine months, three trimesters, three months each. But doctors measure pregnancy from the first day of what they call the last normal period. The first trimester lasts 13 weeks, but by the first day of the first missed period, the official count is already at four. To an uninformed public, the confusion around this measurement could imply that patients have taken longer to seek out treatment than they actually have. It also means that abortion bans based on weeks are counting days prior to conception. Chiavarini explained the different procedures for medication abortion and surgical abortion to the full and rapt conference room. The medication regimen she recommended involves doses of mifepristone and misoprostol, which together block progesterone (“Pro-gestation, get it?”), dilate the patient’s cervix, and induce uterine contractions that expel the products of conception. As an instructor, Chiavarini consistently acknowledged—then sliced through—the thin film of embarrassment that covers the subject, even for med students. Patients might prefer medication abortion for the sense of control, Chiavarini said, or because they can expel the products of conception in the relative comfort of their own homes. Still, medication abortion requires patients to return to their provider and undergo an ultrasound to make sure all tissue has passed from the uterus. If patients are coming from out of town—which is common, since 9 out of 10 counties in America lack their own providers—a surgical procedure is a safer and more efficient choice. Chiavarini told the story of a college student who had an incomplete medication abortion and, unaware she was still pregnant, returned to campus. She didn’t get to the clinic until a day before her state’s 22-week ban would have forced her to bring the pregnancy to term. Chiavarini performed what should have been a two-day procedure in the legally available one day. She cited this as an example of the “flexibility” required by the job. To begin a first-trimester surgical abortion, the provider administers a paracervical block, which is two painkiller shots into the cervix. “Vaginas are not sterile,” Chiavarini reminded the audience as she demonstrated her “no-touch” technique for handling the metal dilators (small rods with the ends tilted at angles and tapering to different widths), flipping one between her fingers laterally to access either side. Passing the dilators around, the attendees mimicked her movements automatically. 
After the provider dilates the cervix, they insert into the uterus the cannula, a rigid or semi-flexible plastic tube averaging around 10 mm in diameter, which is narrow—the size of a pearl, significantly smaller than a dime. In the first six weeks of a pregnancy, it’s possible for the gestational sac to fit through the tube whole. Chiavarini mentioned receiving a texted picture from her friend, another provider, of a sac pulled successfully intact, a sort of abortionist’s bull’s-eye. “You’ll do these things,” she told her audience about texting gestational sac photos. “You think you won’t, but you will.” The abortionist evacuates the products of conception through the cannula and attached tubing, into the aspirator, which is emptied into a bucket. Despite what the name might imply, surgical abortion is quicker and simpler than medication abortion, and it’s the more common procedure. “The truth is, doing most abortions is technically easy,” Chiavarini said. “But patients bring with them their stories and their complex lives and situations, and that’s the part that’s hard.” Whether surgical or medication, serious complications are rare. Chiavarini listed penicillin, driving, and (indeed) giving birth as statistically riskier. “We’ve been put on the periphery of medicine because we do the dirty work.” While American maternal mortality has increased alarmingly in recent years (an increase of almost 60% from 1990 to 2015), the number of abortion mortalities is so low that the Centers for Disease Control and Prevention (CDC) calculates using five-year averages. Over the last three years for which there are data (2011–2013), the CDC reported 10 total abortion deaths, and the agency has not recorded a fatality due to an illegal abortion since 2004. It’s in the interest of pro-abortion-rights protesters and antis alike to dramatize the dangers around the procedure, but the numbers are a testament to the quality of care at the clinics—most visibly, Planned Parenthood—that perform 95 percent of abortions in the United States. One reason abortions are safer than they used to be is that the patients who seek them do so earlier. At legalization in 1973, fewer than 40 percent of abortions occurred in the first eight weeks of pregnancy; now, it’s up to two-thirds, and over 90 percent are performed in the first trimester. That means that most patients who choose to terminate a pregnancy do so during their first missed menstrual cycle and before the embryo develops into a fetus. Factual statements like these have a political quality to them, but they’re also essential to understanding the procedure. As the only group eager to talk about specifics, antis have defined abortion in the public imagination. But compared to the “baby-killing” picture Americans of all ideological positions have internalized to a certain degree, the tools are incredibly small. The smallness of the cannula, for example, presents a problem for anti-abortion propagandists, who insist on depicting products of conception as having visibly human features, rather than the actual pearl-sized cell clusters they are. But as overwhelming as the antis are—both vigilante and in government—the providers and students seemed most frustrated with a medical establishment that has marginalized them and overloaded them with work at the same time. There’s pride to being part of the small corps of abortionists, both in the work they do and in the obstacles they have to overcome to do it. 
They’re idols in the progressive feminist communities they belong to. But not everyone who wants to perform abortions also wants to be brave for a living. Today, they’re not left with much of a choice.
https://gen.medium.com/the-future-abortion-providers-of-america-9238b1664b93
['Malcolm Harris']
2019-09-13 18:23:33.190000+00:00
['Youth Now', 'Politics', 'Health', 'Abortion', 'Power']
Data Focused Decision making for Organizations: A DSI Case Study
In late Spring of 2018, I was elected President of DSI (Data Science and Informatics), which is the Data Science student group at the University of Florida. We teach workshops (Python, R, NLP, ML, you name it) and grow the Data Science community. Soon after my election came this: “What kind of idiot would I be if I ran a Data Science Organization without applying Data Science to it” The rest of this post elaborates how we, throughout the Fall of 2018, brought the organization from the state of “we have very little data and the data we do have is unusable” to “we have an organized and useful source of data and have begun to take action from our generated insights.” Over the years of reading data science-related posts, I’ve often felt like this sort of data engineering/collection/synthesizing work is underrepresented, so here we go! Data Sourcing Thankfully, DSI has a history of creating sign-in sheets for our workshops with detailed information about the participants including names, emails, majors, and the extent of their programming experience. However, the data we have kept over the past 3 years has not been kept with analytics in mind, and before the data cleaning process, the set of auto-generated google sheets looked a bit like this. One year, DSI had tracked participant’s class as a string (Freshman, Sophomore, etc), another year had years at UF as integers (1, 2, etc) without the ability to distinguish between first-year graduate students and first-year undergrads and even a third kept academic standing (Undergrad vs Grad school). We had tracked email 5 (!!!!) different ways: Email, email, e-mail, Contact (email) and E mail. These discrepancies were clearly created over the years as the executive board turned over and new people were creating sign-in sheets, which makes sense and comes from good instincts! But the end data is partially unusable as it is not analytics first. The lesson here is: Any time spent on data intentionality is compounded 10x as the data grow. DSI has become a pillar of analytics and teaching at UF, and as the organization matured, serving over a thousand students per year, the data’s issues grew alongside. Standardizing and Automating Standardizing data is equivalent to asking, in our case, what do we want to learn about people who come to our workshops? The easiest way to tell what an org/company cares about is to find out what they track. Is it user growth? Repeat attendance? Demographic characteristics? Once your organization has come together and figured that out, standardization comes more naturally. Our solution? Templated forms and the R package googlesheets. The form ensures that the same data is kept time after time, and the package automatically scrapes the sheets and pulls the data together. The new executive board is creating a better solution using login information and databases, but google sheets and a couple good R (or Python) scripts should do in a pinch. Finally, at this point, we had a relatively clean dataset with DSI’s history over the years, and we could attempt to use this data for mission-driven organization change. This is, in my opinion, the hardest part of Data Science because you never really know if anything you’re working towards will be useful. What if we spent all this time, all this effort, for nothing? There is no a priori way to know the value of data, only a posteriori. This is really why discussing the data collection and cleaning is so tremendously crucial because it comprises 80% of the workflow. 
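The project's actual pipeline used templated Google Forms and the R googlesheets package, and that code isn't reproduced here. As a rough illustration of the same standardization step, here is a small Python/pandas sketch that stacks yearly sign-in sheets whose email field was named the five different ways quoted above into one canonical column; the tiny example DataFrames and their values are hypothetical stand-ins for the scraped sheets.

```python
import pandas as pd

# Tiny stand-ins for three years of auto-generated sign-in sheets;
# in the real project these came from Google Sheets.
sheet_2016 = pd.DataFrame({"Name": ["A"], "e-mail": ["A@ufl.edu"]})
sheet_2017 = pd.DataFrame({"Name": ["B"], "Contact (email)": ["B@UFL.EDU"]})
sheet_2018 = pd.DataFrame({"Name": ["C"], "E mail": [" c@ufl.edu "]})

# The five historical spellings of the same field, mapped to one canonical name
EMAIL_VARIANTS = {"Email": "email", "email": "email", "e-mail": "email",
                  "Contact (email)": "email", "E mail": "email"}

def standardize(df):
    """Rename known column variants and normalize the email values."""
    df = df.rename(columns=EMAIL_VARIANTS)
    df["email"] = df["email"].str.strip().str.lower()
    return df

# One analytics-ready attendance table instead of three inconsistent sheets
attendance = pd.concat(
    [standardize(df) for df in (sheet_2016, sheet_2017, sheet_2018)],
    ignore_index=True,
)
print(attendance)
```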
There is no business end to DSI; we teach and help because we enjoy it and find it fulfilling, and these lessons we’re learning as an organization, however cheap now, are invaluable to young data scientists in the workforce Exploratory Data Analysis Back to the analysis: using data to further DSI’s missions. This begs the question, what is DSI’s mission? For the first few years of DSI, it was to learn and teach as fast as we can. This has worked quite well, as, in its history, DSI has had ~2500 attendees. This graph is cumulative attendance, but it’s pretty clear that as our content and outreach continue to improve (thanks to a much shorter feedback loop than nearly any other group at UF), students will want to learn programming skills. The breakdown of DSI attendees is expected, with the plurality being technology majors but with significant numbers from social studies, engineering (formal sciences), business, etc. This next visualization, a histogram of return attendees, was the one that really struck the DSI exec board. A huge percentage of people who went to DSI went only for one or two workshops (85% in total). This is rather unsurprising, as there isn’t a strong reason to attend an intro to Python workshop multiple times. DSI has done a tremendous job of being a place for learning data science at UF but hasn’t approached a different problem: creating a data science community. This created a leaky user bucket for user retention, which wasn’t our intention. Creating Community Starting a community is difficult for a few reasons, one being there are really only proxies for good metrics. Is having a high return rate all the evidence needed for a community? Certainly not. It seems like a necessary but not sufficient proxy for community. We took three main initiatives in the Fall of 2018 to try and build this community. First, we created Data Gator, UF’s first data science competition in collaboration with UF libraries (if you’re reading this as a UF student, you should enter!!). Then, every other week, we came up with an event called Data Science Wednesday, where our theory was: Community = Data + Coffee + Food + Time. We’ll give students food and interesting datasets, and see what they come up with. One dataset focused on detecting poisonous mushrooms, another on playing Fortnite while high, others on bike share rides, and even one on statistics about Pokemon which produced the graph below. And finally, after looking at the breakdown of our workshops, we found that industry-specific workshops had higher percentages of first-time attendees (in our first Natural Language Processing workshop, we had a plurality of the linguistics Ph.D. students because there were no Python classes taught by the department). We then continued to develop more niche workshops, like Statistics for Data Science, a wonderful Tableau workshop, and even an Actuary workshop, to attract different parts of campus. Results It’s difficult, and probably a bad idea, to evaluate changes to an organization after less than a semester. With that in mind, a few numbers popped out of the last semester. Fall 2018 was the highest attended DSI semester ever and the percentage of students who came to more than two workshops doubled. I’m really excited about these results, not only as a potential success story of data-focused decision making at an organizational level but also with where the exec board after my tenure is bound to take the org. 
Anyway, that’s the more complete story about how we did all the boring but highest leverage data work at a student org and saw some great preliminary results by looking at some nice graphs, clearly defining what we wanted, and making some changes we could measure. The exec team has some other projects they’re working on, including building a login system and proper databases for the organization and trying to make a ‘how much pizza should we order’ model to optimize our budget. There is no group at UF that I’m more excited about (find them here and attend the annual symposium at the end of the month), keep an eye on this space! As a plug, if you’re a hiring manager and you’ve made it this far, congrats! Your prize is this advice: Hire these people early because I’m sure there will be a bidding war for all of them soon enough. Special thanks are due to Delaney Gomen, who was instrumental in the data cleaning and also was behind a lot of the visualizations. Some of this post, in presentation form, can be found here from a talk I gave at UF in the Fall of 2018 and a portion of the code can be found here. See more analytics work like this on my website or follow me on twitter.
https://towardsdatascience.com/data-focused-decision-making-for-organizations-a-dsi-case-study-5f332a01d80a
['Tyler Richards']
2019-03-13 13:13:41.034000+00:00
['R', 'Python', 'Data Science', 'Data Visualization']
Sweet Cecilia’s ‘A Tribute to Al Berard’ Is Up for a Grammy Award
Louisiana-based trio Sweet Cecilia recently received a Grammy nomination for their scrumptious, beautiful album, A Tribute to Al Berard. Produced by Grammy-winner Tony Diagle, who has worked with BB King and others, the album is gloriously wrought. Sisters Laura Huval and Maegan Berard, along with first cousin Callie Guidry, make up Sweet Cecilia. Playing and singing together from childhood, the threesome follows in the footsteps of their father and uncle, Al “Pyook” Berard, who encouraged them to pursue music. Their name — Sweet Cecilia — is a nod to their hometown, Cecilia, Louisiana, as well as the patron saint of musicians, St. Cecilia. Officially formed in 2011, Sweet Cecilia dropped their debut self-titled album in 2015. In 2017, they released their sophomore album, Sing Me a Story. Sweet Cecilia’s sound amalgamates aspects of Americana, Louisiana roots music, folk, country aromas, and rock into tantalizing sonic confections chock-full of mesmerizing hues. The trio has performed at Festival International de Louisiane, Festival Acadiens et Creole, the legendary Breaux Bridge Crawfish Festival, the French Quarter Festival, the New Orleans Jazz and Heritage Festival, as well as many more. Encompassing seven tracks, the album begins with “Tout Les Cadien Content,” delivering tasty tinctures of Louisiana bog tints merged with relishes of pop dynamics. Laura struts her grand vocal prowess, imbuing the lyrics with exhilarating timbres and alluring flair. Entry points include “Dans La Louisiane,” rife with opulent harmonies and velvety textures. A personal favorite is “Fais Do Do Waltz,” brimming with Maegan’s luminous guitar and the delectable purr of a fiddle. The undulating rhythm of the tune is topped by flawless vocals. “Saute La Barriere” (Jump the Fence) travels on indulgent, blues-flavored colors and then transitions into a tune seething with country and alt-rock elements. A down and dirty guitar solo from Maegan gives the tune tasty foggy-bottom brio. On the final track, “Sing Me A Song,” the threesome parade their stellar chromatic fusion as their voices unite in exquisite a cappella. On A Tribute to Al Berard, Sweet Cecilia offers music at once elegant, intimate, and mesmerizing — a marvelous showcase of superb talent. Follow Sweet Cecilia Website | Facebook | Instagram | Twitter | Spotify
https://medium.com/pop-off/sweet-cecilias-a-tribute-to-al-berard-is-up-for-a-grammy-award-edd4dda5d006
['Randall Radic']
2020-12-19 21:16:13.074000+00:00
['Sweet Cecilia', 'A Tribute To Al Berard', 'Music', 'Grammys', 'Americana Music']
How to Build a Reporting Dashboard using Dash and Plotly
A method to select either a condensed data table or the complete data table. One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present: Code Block 17: Radio Button in layouts.py The callback for this functionality takes input from the radio button and outputs the columns to render in the data table: Code Block 18: Callback for Radio Button in layouts.py File This callback is a little bit more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below changes the data presented in the data table based upon the dates selected, using the callback statement Output('datatable-paid-search', 'data'), this callback changes the columns presented in the data table based upon the radio button selection, using the callback statement Output('datatable-paid-search', 'columns'). Conditionally Color-Code Different Data Table Cells One of the features which the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table highlighted based upon a metric’s value; red for negative numbers, for instance. However, conditional formatting of data table cells has three main issues. First, there is a lack of formatting functionality in Dash Data Tables at this time. Second, if a number is formatted prior to inclusion in a Dash Data Table (in pandas, for instance), then data table functionality such as sorting and filtering does not work properly. Third, there is a bug in the Dash data table code in which conditional formatting does not work properly. I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide: Code Block 19: Conditional Formatting — Highlighting Cells The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash. *This has since been corrected in the Dash documentation. Conditional Formatting of Cells using Doppelganger Columns Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and the Dash data table. These doppelganger columns hold either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug in which the decimal portion of a value is not considered by conditional filtering).
The doppelganger columns can then be added to the data table but hidden from view with the following statements: Code Block 20: Adding Doppelganger Columns Next, the conditional cell formatting can be implemented using the following syntax: Code Block 21: Conditional Cell Formatting Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%). One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values. The complete statement for the data table is below (with conditional formatting for odd and even rows, as well as highlighting cells that are above a certain threshold using the doppelganger method): Code Block 22: Data Table with Conditional Formatting I describe the method to update the graphs using the selected rows in the data table below.
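The code blocks referenced above are embedded gists that do not render in this excerpt, so here are two minimal sketches of the ideas they describe. First, the radio-button column switcher (in the spirit of Code Blocks 17 and 18): the table id 'datatable-paid-search' comes from the text, but the radio-button id and the column lists are hypothetical placeholders, and the Dash 2.x-style imports are an assumption rather than the article's exact code.

from dash import Dash, dcc, html, dash_table
from dash.dependencies import Input, Output

app = Dash(__name__)

# Hypothetical column sets; the real dashboard uses its own condensed/complete lists.
CONDENSED_COLS = ['Channel', 'Spend', 'Revenue YoY (%)']
FULL_COLS = CONDENSED_COLS + ['Sessions', 'Transactions', 'CPC']

app.layout = html.Div([
    dcc.RadioItems(
        id='table-view-radio',  # hypothetical id
        options=[{'label': 'Condensed table', 'value': 'condensed'},
                 {'label': 'Complete table', 'value': 'complete'}],
        value='condensed',
    ),
    dash_table.DataTable(id='datatable-paid-search'),
])

@app.callback(Output('datatable-paid-search', 'columns'),
              [Input('table-view-radio', 'value')])
def update_columns(view):
    # Return the column definitions to render, based on the radio selection.
    cols = CONDENSED_COLS if view == 'condensed' else FULL_COLS
    return [{'name': c, 'id': c} for c in cols]

Second, a sketch of the doppelganger-column workaround (in the spirit of Code Blocks 20-22), assuming a Dash version in which the hidden_columns property and filter_query syntax are available; the sample DataFrame is made up, while the column names Revenue YoY (%) and Revenue_YoY_percent_conditional come from the text above.

import pandas as pd
from dash import dash_table

df = pd.DataFrame({
    'Channel': ['Brand', 'Non-brand'],
    'Revenue YoY (%)': ['12.3%', '-4.7%'],             # formatted strings for display
    'Revenue_YoY_percent_conditional': [1230, -470],   # raw value * 100 for filtering
})

table = dash_table.DataTable(
    id='datatable-paid-search',
    columns=[{'name': c, 'id': c} for c in df.columns],
    data=df.to_dict('records'),
    hidden_columns=['Revenue_YoY_percent_conditional'],  # keep the helper column out of view
    style_data_conditional=[{
        # Filter on the hidden doppelganger column, but style the visible formatted column.
        'if': {'filter_query': '{Revenue_YoY_percent_conditional} < 0',
               'column_id': 'Revenue YoY (%)'},
        'color': 'red',
    }],
)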
https://medium.com/p/4f4257c18a7f#a5c5
['David Comfort']
2019-03-13 14:21:44.055000+00:00
['Dashboard', 'Towards Data Science', 'Data Science', 'Data Visualization', 'Dash']
‘Like Being Grilled Alive’: The Fear of Living With a Hackable Heart
The first cardiac device I had was a pacemaker, implanted when I was nine years old. Though pacemakers and ICDs have overlapping patient demographics and are sometimes bundled in the same device, they have drastically different functions. Pacemakers help a patient’s normal heart rhythm cycle, while ICDs are tiny defibrillators meant to terminate dangerous arrhythmias and prevent cardiac arrest. In everyday life, defibrillators wait in hospitals and public spaces (gyms, churches, movie theaters) for disaster to strike — they are tools you seek out in an emergency. But an ICD brings the emergency response to you. It is watchful, an active listener. I think of a pacemaker as a heartbeat assistant; an ICD is an arrhythmia assassin. For as long as I’ve had one, I’ve been acutely aware that a pacemaker is a sensitive machine and can be derailed by plenty of things: airport security; laser tag vests; the seats in 4D amusement park rides; store security towers; cellphones; and still, somehow, microwaves. All of these things could disrupt the pacemaker, reprogram it, even stop it cold. As a child in the grocery store, I ran through the theft towers quickly, like I was trying to shoplift. I sat on the sidelines while friends ripped through laser tag arenas at birthday parties. Fewer than two years into post-9/11 hysteria, I panicked as a nine-year-old when a TSA agent came toward me with a security wand. I bolted, running farther into the terminal at Boston’s Logan Airport. I only made it a few yards before I was stopped by a knee to my chest, a muscled agent pulling me to the ground. My panic had made me into an apparent security threat. Doctors also posed a risk to my new device. During regular office checkups, ominously called “interrogations,” they would place a large magnetic wand over the pacemaker to take control of it. Between in-office interrogations, every three months, my physicians mandated that I do “home monitoring,” which involved a complicated and archaic process. I would hook myself up to a transmitter box that would screech out a dial-up tone to a stranger sitting in a call center somewhere via the receiver of a landline phone. And just like in-office interrogations, I needed to place a heavy round magnet over the device. Because a heavy magnet disrupts a pacemaker, I would sit in a wave of dizziness and nausea while a distant tech received the information. The whole process often lasted 15 or 20 minutes. When it was done, I would sit back in the kitchen chair, spent, waiting for the blood to return to my head. I don’t have to do this anymore. Remote-monitoring pacemakers were first sold to the general public around 2007; currently, the industry standard for remote monitoring involves routers paired via Bluetooth to wireless-enabled cardiac devices. These routers sit in a patient’s bedroom and run constantly, pulling data at regular intervals and transmitting it straight to their doctor via the internet. No phone calls and no magnets involved. Ideally, a patient never even knows their data is being collected. Clinically, the benefits of remote monitoring are twofold: The patient doesn’t have to enter a medical setting to be monitored, which reduces the likelihood of iatrogenic disease — illness caused by the interference of the medical system. At the same time, doctors get more data than they’ve ever had access to, allowing them, ideally, a window to disease prevention. 
(I, along with many other patients, take issue with the second proposition, given that we cannot access our own data; there’s a substantial activist movement toward data liberation that includes cardiac patients who have fought for more than a decade to gain access to the information generated by wireless-enabled pacemakers and ICDs.) “[The benefits of remote monitoring have] been held up over the years with just being able to diagnose something early,” said Dr. Leslie Saxon, a cardiologist and electrophysiologist who runs the Center for Body Computing at the University of Southern California. In 2010, Saxon led a study in partnership with device manufacturer Boston Scientific that found improved survival rates for patients who were monitored remotely, as compared with patients who were only followed with periodic in-clinic visits. “We also learned that we could learn how to program and make these devices a lot better if we were looking at all this data all the time,” she said. But as remote monitoring has become more widespread, concerns about the cybersecurity of the practice have only grown. Since 2011, the FDA has issued at least 11 warnings and many recalls on pacemakers and ICDs over concerns relating to cybersecurity and safety. This includes the 2017 notice for St. Jude devices that I found just before my surgery. The security defect affected at least a half-million patients and was ultimately resolved by a software patch sent directly to their remote monitors. Manufacturers like Medtronic often advise that patients keep their monitors turned on and connected so this sort of patch or upgrade can be delivered. But patches, often quietly sent to the devices, can leave patients in the dark: There is no streamlined process to let patients know when a vulnerability has been identified in their specific device or when a patch might be on its way. And researchers have argued that retroactive patches are no replacement for baked-in security. “The main concern is if vendors continuously rely on reactively resorting to pushing patches instead of securing their devices by design,” Fotis Chantzis, a security engineer who used to hack medical devices for a major health care institution and the lead author of Practical IoT Hacking: The Definitive Guide to Attacking the Internet of Things, told OneZero. “Usually these patches fix a particular vulnerability,” he continued, “but keep in mind that there is also this view of the security community that every bug can potentially be exploited given the right circumstances.” Device companies and doctors are often quick to insist that the cybersecurity concern is overblown. For years, they’ve maintained that while the routers can communicate with and gather data from patient devices, they can’t actually control the devices or deliver reprogramming directives. Dr. Rob Kowal, chief medical officer for cardiac rhythm and heart failure at Medtronic, told OneZero, “[Remote programming is] not possible,” at least with his company’s current home routers. There are two kinds of connections involved in remote monitoring: the connection from the patient’s implanted device to the router, which is often Bluetooth, and the connection from the router back to the data portal seen by the physician, which can use anything from a home Wi-Fi network to a hardline Ethernet cable or a phone line. Manufacturers insist that these channels have now been made secure.
But many of the related FDA notices have warned that hackers could, in fact, assume control of and reprogram a patient’s device. Researchers and white hat hackers have demonstrated that the connections from the device to the router and from the router to the data portal are exploitable. Hackers have made headlines over the past decade-plus by exposing vulnerabilities in pacemakers and ICDs from every major developer, including St. Jude (now Abbott), Medtronic, and Boston Scientific. In 2018, researchers Billy Rios and Jonathan Butts from cybersecurity firm Whitescope demonstrated that they could hack into both cardiac devices and insulin pumps built by Medtronic, with potentially deadly results: They could shock a patient’s heart into cardiac arrest or administer a lethal amount of insulin. They told Wired that the devices lacked basic security functions: Medtronic’s MiniMed line of insulin pumps used radio frequencies that were easy to figure out, and there was no encryption on communications between the pumps and their remote controls. Rios and Butts also discovered that the company’s pacemakers didn’t use code signing, a standard security function that authenticates the legitimacy of things like software updates. Bill Aerts, Medtronic’s director of product security until 2016, is now the executive director at the Archimedes Center for Healthcare and Device Security at the University of Michigan, which was founded by the researcher who, in 2008, co-authored the first major paper on cardiac device security. “Like anything else,” Aerts told me, the level of security built into such devices “was a matter of demand and costs.” He went on to say, “It took a while to educate the engineering community about these risks… Then the boss says, ‘No, that’s going to cost too much to add that extra functionality [security features].’ And so that took a while to get people to believe that, yes, it’s worth investing in.” The company took more than a year and a half to respond to the security concerns flagged by Rios and Butts and was apparently reluctant to offer solutions. “They are more interested in protecting their brand than their patients,” Rios told CNBC at the time. In an article from CBS News, Butts put it bluntly: “We’ve yet to find a device that we’ve looked at that we haven’t been able to hack.”
https://onezero.medium.com/i-live-with-a-digital-security-threat-inside-my-body-ca6b9da0b316
['Jameson Rich']
2020-11-19 17:25:57.149000+00:00
['Technology', 'Health', 'Icd', 'Internet of Things', 'Cybersecurity']
6 Best Maven Online Courses for Beginners in 2021
6 Best Maven Online Courses for Beginners in 2021 These are the best online courses to learn Apache Maven for Java developers from Udemy, Pluralsight, and other online learning portals. Hello guys, if you want to learn Maven and are looking for the best Maven courses, then you have come to the right place. Earlier, I shared free Maven and Jenkins courses, and in this article, I will share the best courses to learn Maven, an essential tool for Java developers. Apache Maven, commonly known as just “Maven,” is an essential tool for Java programmers. It allows you to build your project, manage dependencies, generate documentation, and a lot more. I can vouch for Maven’s usefulness because I come from the pre-Maven world of software development, where you had to manage all the JAR files required by your project yourself. It may sound easy to just download the JAR files, but it’s not so easy in practice. For example, suppose you add a new library to your project, say the Spring framework, which also needs log4j; you think log4j is already there, so you do nothing, only to realize that your application no longer starts and is throwing long and convoluted errors. This can happen because of a version mismatch, e.g. Spring needs a higher version of log4j than the one available in your project. This is just a tiny example of how manually managing dependencies can create nightmares. Maven takes away all that pain by not only automatically downloading those JAR files for you but also creating a central place, known as the Maven repository, to store those JAR files for better management. Btw, Maven is not just a dependency management tool; it’s in fact much more than that. The most significant advantage of using Maven is that it follows convention, which makes software development easier. In a Maven project, you know where your source code is, where your test code is, and where your resources are. You don’t need to spend countless hours going through a large ANT build file to figure out how exactly your artifact is created. Because of these useful qualities, the Java ecosystem has adopted Maven widely. Most open-source Java projects are Maven projects, which makes it easy for developers to understand them and contribute. 6 Things Java Developers Should Know About Maven If you still have doubts about learning Maven, here is some vital information about Maven which will help you realize its value as the most popular build tool for Java developers: 1. Maven has almost 70% of the market, compared to 20% for Gradle and 10% for ANT. 2. Maven is a build tool like make or ANT: it takes your source code and generates a JAR file, a distributable library. It can also run tests and generate Javadoc when you build your software, and it helps with build automation, allowing each developer to kick off the same build to make the same deployable. 3. Maven is also a dependency management tool. When we build an application that requires third-party libraries, Maven brings in those JARs and keeps them up to date. 4. Maven is a command-line tool. You can use Maven from the command prompt, but you can also use it from an IDE like Eclipse or IntelliJ IDEA. 5. Maven is a project management tool. It helps to tell you what is being used in your project, and it can also hold artifacts in a repository like Nexus. 6.
Maven provides a standardized approach to building software, with consistent conventions such as the source and test directories and build artifacts, as it promotes “Convention over Configuration.” You can further read my post, 10 Maven things Java developers should learn, to learn more about Maven, and here is a snapshot of the common Maven lifecycle in a Java application: 5 Best Maven Courses for Java Developers Even though Gradle is making inroads, 70% of the market is still using Maven, and that’s a big enough reason to learn Maven. If you also think so, here are some of the best courses to learn Apache Maven online by yourself. In this course, you will learn about the Maven build lifecycle, how to use Maven to build and package Java projects, and how to use Maven with popular alternative JVM languages, including Groovy, Kotlin, and Scala. I am a big fan of John Thompson, having attended his Spring 5 course, Spring 5: Beginner to Guru. I am hooked on his teaching style and information delivery. When I found that John has a Maven course, I immediately bought it, even though I already knew Maven and had worked on several Maven projects, including a multi-module Maven project. I wasn’t disappointed, and the course helped me fill some of the gaps in my knowledge. It’s particularly useful if you are coming in with no Maven experience, as the information density is perfect from a beginner’s perspective. Here is the link to join this course: Apache Maven: Beginner to Guru. You will also learn to configure Maven to run your unit and integration tests written in JUnit 3, JUnit 4, JUnit 5, TestNG, and the Spock framework; to generate source code from XML and JSON Schemas; and to leverage annotation processing at compile time for Project Lombok and MapStruct. The course also covers the Apache Maven plugin system and teaches you how plugins are used in the build lifecycle. Finally, you will also learn to build Spring Boot applications with Apache Maven, learn about the Spring Boot Maven plugin, deploy project artifacts to Maven repositories, and develop multi-module Maven projects. In short, a perfect course to master Maven, and ideal for both intermediate and senior Java developers.
https://medium.com/javarevisited/6-best-maven-courses-for-beginners-in-2020-23ea3cba89
[]
2020-12-14 06:37:41.003000+00:00
['Tools', 'Software Development', 'Maven', 'Java', 'Programming']
Game B, Game Ant, and the Open Sourced Religion Alternative
Game B, Game Ant, and the Open Sourced Religion Alternative This is how we end war. This article is going to be a little bit complicated for folks who haven’t followed along, but I think once the Handwaving Freakoutery (HWFO) readers are caught up to speed, they’ll be intrigued by it. Herein, we’re going to critique a dialog between myself and Jordan Hall about something he and his think-group call “Game B,” which is curiously similar yet importantly divergent from the Culture War Analysis material of the past two years on HWFO. After some extensive dialog, I think his group has many things mostly right but a few vital things slightly wrong. Let’s see how, and discover where that leads us. And then let’s end war, because that would be awesome. Background Jordan Hall is an intelligent, semi-famous guy in nerd circles who made his hay early in life by co-founding DivX, Inc. He’s a Medium writer going back to 2014 with an impressive list of think pieces. He hit Intellectual Dark Web-adjacent circles with his Deep Code publication here, most particularly Situational Assessment 2017: Trump Edition, which peeled apart some concepts he calls the “Blue Church” and the “Red Religion.” Very similar to our mode of thinking here on HWFO when we do Culture War Analysis. The Dialog The general framework of the discussion goes like this. HWFO analysis views mankind as a group of animals with extra white space in our brains, into which we can install behavioral software. Some of our behavior is genetic, but some is dictated by that software. And since the software isn’t genetic, it can evolve much faster. Our great leaps of the past several thousand years have been made by (perhaps unintentionally) fine-tuning our software to look more and more like the hard-coded behavior of the ants, since the ants objectively run the planet today. Agriculture, social hierarchies, division of labor, civil infrastructure, self-sacrifice, war, genocide, and slavery are all ant-things, and ants make up a fifth of all animal biomass on the planet. Follow the “ants” link below for a refresher. To be clear, we’re still not beating the ants, and we’ve only come close by behaving like them. Game B theorists call this software “Game A,” and are generally of the view that following it to its natural conclusion will create an existential disaster, either through global thermonuclear war, environmental collapse, or something else of similar magnitude. The “us versus them” nature of Game A creates coordination problems, and our level of technology and overall population burden will be planet-destroying without better ways to coordinate. They decided to convene and work towards creating an alternate system. A “Game B.” My dialog with Jordan, initially brokered by Peter Limberg (Intellectual Explorers Club), began on Facebook and pivoted over to Medium, in a series of responses under this HWFO article: The dialog is long, but begins here: And it culminated in an 80-minute dialog on Peter’s podcast: Throughout the course of the dialog, I took notes on where Jordan and I diverged in our thinking. The Critique Jordan views Game A as inherently fragile. The Game B theorists use this language a lot. I don’t. I view Game A as inherently strong, outside of possible boundary conditions that create an existential threat to mankind itself. This is a huge and important disagreement between our viewpoints.
It seems to me that his viewpoint is based on a hyper-focus on Game A’s intermittent systemic failures, such as the ’08 financial crisis, the collapse of the Mayan Empire, or similar. My viewpoint is based on the idea that Game A is a product of memetic evolution over thousands of years of trial and error, a Darwinism of ideas. I view the process by which Game A has evolved to be a more robust system design process than any process a group of intellectuals could cook up. I infer that Jordan doesn’t think mankind is emulating ants, or at least is not eventually going to be, because we are evolving so fast. Our rate of memetic evolution is limited not by our genetic interactions but by our “bandwidth” to process ideas. As the bandwidth expands, we evolve faster, which is how we got this far towards the ant solution in so short a time. Our evolution is happening memetically instead of genetically. This reminds me a lot of the comedy horror book “Fluke, or I Know Why the Winged Whale Sings,” by Christopher Moore. Where I break from him here, I think, is that we are evolving towards a specific maximum, and that maximum is the ants. We won’t get past the ants until we get to the ants, and the thing we’re trying to avoid, I think, is becoming ants. That stage is necessarily horrific. He sees us zooming somewhere else as our rate of memetic evolution advances. I see us moving faster towards becoming ants. This is why I call it Game Ant, and why I think if anyone is going to alter our course of memetic evolution, they had better do so quickly. We disagree about exponential growth as it applies to these evolutionary systems. In my mind, and he will agree on the surface, nothing grows exponentially. Moore’s Law will eventually reach a limit, and even if it doesn’t, the number of interactions within the human species, what he calls “bandwidth,” is necessarily limited by the number of brains on the planet that can process those interactions. That limit is fixed by the population, until/unless Elon Musk succeeds at computer-brain interfaces or the AIs become self-aware. But at that point we wouldn’t be talking about human evolution at all; we’d be talking about machine evolution, and it seems almost pointless to me to speculate what that would even look like. He seemed to think that military advances are happening at that Moore’s Law rate as well, and we disagree on that. If we are going to measure military destructiveness, the scale starts at the sharpened stick and ends at the H-bomb, and any developments since then have simply been to develop ways to kill smaller numbers of people more efficiently, not destroy planets. In my mind we reached that “existential threat” half a century ago, and the only thing that could make it any worse is nuclear proliferation. Although as a gun proponent, and a “good fences make good neighbors” proponent, I’m not convinced that more nuclear proliferation wouldn’t make the world less violent. “Mutually assured destruction,” while it certainly breeds a kind of angst and fear that I grew up with as a child of the 1980s, seems generally to have worked. If we were to evaporate the nukes, we’d probably get ourselves into more world wars of the style not seen since the bomb hit Nagasaki. I will admit up front that my position is arguable, and isn’t popular. I think one of the major differences between HWFO thinking and Game B thinking is that I view indoctrination (Jordan prefers “enculturation”) as an essential element of Game A.
Wherever Game A might stumble into a Nash Equilibrium that is suboptimal, such as the killing of the whales, the gap can be bridged by a cooperative encultured behavior. The Golden Rule is the chief example, but there are others. To me it seems like the drive of any project to get us past these existential threats should be to run the same program we’ve always run. Better enculturation beats worse enculturation, so identify the encultured elements that will bridge these gaps and avoid these existential crises, and then identify the best, cleanest, most effective way to roll them out. I tend to think religion, or religion-like analogues, work very well here. The sense I got from Jordan is that he dislikes the word “indoctrination” entirely, and wants to, in the words of Kevin Flynn from Tron, “build a better system.” He views Game A as the mother of this new system, and Game B as the child. He seems to think that my approach, of spreading new enculturations to bridge gaps in Game A, can’t work because there will be too many gaps to bridge. He views it like the legal system, where laws are constantly growing in complexity to plug holes in behavior, but the laws create new holes in behavior, and the thing becomes unmanageable. He likens this effect to plugging holes in a dike, where each plug creates two new holes. We disagree heavily about this. When you have an indoctrinated behavioral script telling you the difference between ‘right’ and ‘wrong,’ then you don’t have to know a giant body of if/then statements like computer code, or legal code. You have certain key principles you default to that are applicable for a wide range of scenarios, which can just as easily be applied to new unforeseen scenarios. I think his analysis here is erroneous. The reason core ethical principles work is that they’re inherently scalable to emerging environments. If they weren’t, we wouldn’t have ethics at all; we’d just have law. Law is constantly expanding to plug gaps in a dike, but ethics are simple and mostly universal. Most of our body of law could be reduced to “don’t be an asshole,” provided we had a culture-wide understanding of what “asshole” meant. In my mind, our current problem as a species is that the environment has changed and is changing, but the morality doesn’t have an update mechanism, largely because religion lacks such a mechanism. The fix in my mind is not to abandon Game A; it’s to adjust the religious circulatory system inside Game A to be able to advect new antibodies against bad equilibria, while getting the wise to spend time cracking out new antibodies. And that, in my mind, is literally all that needs to be done to fix it. In the next portion of the podcast dialog, we drilled into some semantics which are important. In Jordan’s terminology, morality is simulated thinking, following rote doctrine which was written by wise people, in order to affect behavior. Ethics is actual thinking, having the wisdom to know why you should behave a certain way. From his point of view, a more rapidly changing environment will create a situation where morality no longer works, because there will be too many new cases erupting where there’s no clear indication of which behavior is appropriate. This goes back to his analogy to law. In his view, the only way for people to deal with the coming scenario he envisions, where the social environment is changing at an ever-increasing rate, is to pivot everyone from morality to ethics. From rote behavior to wisdom. This is the nature of his project.
While I find that notion beautiful, I don’t think it’s realistic at scale. I think his think-group, who are all fabulously intelligent, considerate thinkers, have a huge blind spot here. I think “everyone should be wise” is a pipe dream, and I think human history and the evolution of these societal management memes to date reinforce my opinion. The way this has always functioned is that a small number of wise people wrote down behavioral scripts for a large number of less wise people to follow. There is a “Wisdom to Morality Pipeline” that’s built into our societal frameworks, which has existed since we began labor specialization. Let the farmers farm and the wisemen be wise, because the farmer either doesn’t have the genetic capability, or the time, or the motivation, to do the wisdom stuff. Managing this pipeline is the literal function of religion within Game A, and has been for thousands if not tens of thousands of years. It’s why our brains are wired for it. Jordan doesn’t seem to think this will work. He points to literal war, and what he calls the “war on sensemaking,” as being different aspects of the same overall thing. Here on HWFO we’ve simply been calling that “Culture War.” In his mind, wars of bullets and wars of media are attempts by one gaggle of wisemen to lop off the heads of another gaggle of wisemen, and hijack their base of moral followers. And as I understand it, he seems to fault the Wisdom to Morality Pipeline itself. I think this pipeline is essential to maintain, as a feature of a culture, because “everyone should just be wise” isn’t realistic. I also think that it’s often not the wisemen driving the boat. It’s the abstracted concept itself driving the boat, be that concept “third wave feminism” or “communism” or “capitalism” or “white nationalism.” Even the wisemen are acting out scripts, when it comes to culture war. They’re mostly just tools of the scripts themselves. We covered that here: I think you can end war a different way. Instead of scrapping the Wisdom to Morality Pipeline and trusting that everyone can be wise, you build a system where the pipelines come from the ground up, and can change and adapt. A democratized, evolving morality pipeline. An open sourced religion. Which is exactly what Social Justice has done. They just built it on an unstable and poorly thought-out seed crystal. We covered that here: Folks in this intellectual space should not spend a tenth of a second bashing social justice, but rather should spend days figuring out how the social justice crowd completely unwittingly built an open sourced religion. They’re a (currently failing) beta test for a concept that could literally end war, instead of constantly waging it like they do now. From my point of view, they look like they built the world’s first torque wrench and are using it to bash holes in cars while I stand at the edge of the garage shouting to bystanders, “wow, folks, look! A torque wrench!” Further, social justice has rediscovered a culture war route that most culture warriors have forgotten — memetic hijacking. The Christians spread through Europe by mapping their teachings onto Pagan teachings, and the Social Justice folks are doing the same thing now to Christianity. “Privilege” = original sin, “problematic” = heresy, “cancellation” = excommunication, and such. “Woke” = saved. By hijacking an established memetic system, they made their indoctrination set easy to adopt by the target group.
I’m not even sure that they intended it to work out this way. But because the system was being developed on tumblr and a cloister of academic journals, it’s rooted in a kind of indoctrinational Darwinism, where a giant internet soup of social justice ideas brewed, and the most alluring and spreadable bits rose to the top and dominated the conversation. This mirrors the open sourced software engine, or the free market, or the Serengeti Plain, where the most effective ideas or companies or organisms grow to dominate. They built an evolutionary idea marketplace. Open Sourced Religion The functional solution I envision looks like this. Steal the “evolutionary idea marketplace” functions that have given rise to Social Justice, but seed them with a different seed crystal. Rationalism instead of postmodernism. Identify the existing cultural structures that work historically, instead of wishing de-facto dystopian cultural structures worked. Identify where these existing cultural structures appear within the current world religions, as well as within Social Justice, economics, and psychology. Solidify them into a unifying principle. A meta-golden rule. Write it down in a book, with a black cover. I’m partial to writing “Don’t Panic” on the front of it. Package the meta-golden rule and distribute it throughout the evolutionary idea marketplace. Crowdsource the project of showing how this meta-golden rule exists at the heart of all world religions, social justice, and contemporary philosophical ethics frameworks. This book becomes a meme that can hijack any existing culture. Allow the behaviors exemplified within it to spread via a new Wisdom-Morality Pipeline from the ground up instead of the top down. A guru-less religion. An adaptable religion. A form that subsumes all religions, and teaches them all that they’re all saying the same thing. A form that spreads by the internet beyond national boundaries. A form whose wisemen cannot be lopped off. A form that ends war.
https://medium.com/handwaving-freakoutery/game-b-game-ant-and-the-open-sourced-religion-alternative-ce1103917ac5
['Bj Campbell']
2020-01-07 20:09:57.429000+00:00
['Society', 'Culture War', 'Culture', 'War', 'Religion']
ML05: Neural Network on iris by Numpy
(7) Testing

print("Epochs = 15")  # Run both epochs = 15 & epochs = 100
print('weights = ', weights, 'bias = ', bias)
print("y_test = {}".format(y_test))
activation(x_test, weights, bias)

Figure 9: Testing result of epochs = 15
Figure 10: Testing result of epochs = 100

If we set the decision boundary at 0.5, then we get 100% accuracy with both epochs = 15 and epochs = 100. The first 10 predictions with epochs = 15 are between 0.46~0.49, while the first 10 predictions with epochs = 100 are between 0.23~0.30. The last 10 predictions with epochs = 15 are between 0.57~0.63, while the last 10 predictions with epochs = 100 are between 0.64~0.81. As the number of epochs rises, the predicted values for the two flower groups become more distant from each other.
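The earlier parts of the article (not included in this excerpt) define weights, bias, and activation. As a rough, self-contained sketch of what the testing call above relies on, here is a single-layer model with a sigmoid output; the sigmoid form and the placeholder numbers are my assumptions, not necessarily the author's exact code.

import numpy as np

def activation(x, weights, bias):
    # Weighted sum followed by a sigmoid squashing the output into (0, 1);
    # predictions above the 0.5 decision boundary fall on one side, below it on the other.
    z = np.dot(x, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Tiny made-up example: two samples with the four iris features each.
x_test = np.array([[5.1, 3.5, 1.4, 0.2],
                   [6.7, 3.0, 5.2, 2.3]])
weights = np.array([-0.1, -0.2, 0.3, 0.4])   # placeholder values, not trained weights
bias = -0.5
print(activation(x_test, weights, bias))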
https://medium.com/analytics-vidhya/ml05-8771620a2023
['Morton Kuo']
2020-12-23 16:41:30.784000+00:00
['Machine Learning', 'Neural Networks', 'Deep Learning', 'Numpy', 'AI']